Test Report: KVM_Linux_crio 20052

                    
8d1e3f592e1f661c71a144f8266060bd168d3f35:2024-12-05:37356

Failed tests (33/311)

Order   Failed test   Duration (s)
36 TestAddons/parallel/Ingress 156.95
38 TestAddons/parallel/MetricsServer 329.56
47 TestAddons/StoppedEnableDisable 154.45
166 TestMultiControlPlane/serial/StopSecondaryNode 141.83
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.8
168 TestMultiControlPlane/serial/RestartSecondaryNode 6.63
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.41
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 360.34
173 TestMultiControlPlane/serial/StopCluster 142.2
174 TestMultiControlPlane/serial/RestartCluster 836.8
230 TestMultiNode/serial/RestartKeepsNodes 327.1
232 TestMultiNode/serial/StopMultiNode 145.65
239 TestPreload 177.08
247 TestKubernetesUpgrade 443.92
264 TestPause/serial/SecondStartNoReconfiguration 48.84
284 TestStartStop/group/old-k8s-version/serial/FirstStart 332.53
292 TestStartStop/group/no-preload/serial/Stop 139.18
294 TestStartStop/group/embed-certs/serial/Stop 139.03
297 TestStartStop/group/old-k8s-version/serial/DeployApp 0.52
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 111.26
301 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.21
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
308 TestStartStop/group/old-k8s-version/serial/SecondStart 726.48
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.57
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.51
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.43
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.55
315 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 397.95
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 488.69
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 285.9
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 85.58

TestAddons/parallel/Ingress (156.95s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-396564 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-396564 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-396564 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8e772fa6-e5dd-49b1-a470-bdca82384b0b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8e772fa6-e5dd-49b1-a470-bdca82384b0b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004062333s
I1205 19:06:53.515742  538186 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-396564 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.273491141s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
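Note: the ssh exit status 28 above most likely propagates curl's own exit code 28 ("operation timed out"), i.e. nothing answered on 127.0.0.1:80 inside the VM within the allotted time. A minimal standalone sketch of the same probe the test runs is shown below; the binary path and profile name are taken from the log above, and the 3-minute timeout is an arbitrary illustrative choice, not the test's actual deadline.

	// Illustrative reproduction of the failing probe (not part of the test suite).
	// It shells out to the same minikube binary and profile shown in the log and
	// applies an explicit timeout so a hung curl surfaces as a context error.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
			"-p", "addons-396564", "ssh",
			"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %q\nerr: %v\n", out, err)
	}
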
addons_test.go:286: (dbg) Run:  kubectl --context addons-396564 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.9
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-396564 -n addons-396564
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-396564 logs -n 25: (1.345285419s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:02 UTC |
	| delete  | -p download-only-765744                                                                     | download-only-765744 | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:02 UTC |
	| delete  | -p download-only-196484                                                                     | download-only-196484 | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:02 UTC |
	| delete  | -p download-only-765744                                                                     | download-only-765744 | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-199569 | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC |                     |
	|         | binary-mirror-199569                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46195                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-199569                                                                     | binary-mirror-199569 | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:02 UTC |
	| addons  | enable dashboard -p                                                                         | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC |                     |
	|         | addons-396564                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC |                     |
	|         | addons-396564                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-396564 --wait=true                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:05 UTC | 05 Dec 24 19:05 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | -p addons-396564                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-396564 addons                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-396564 ip                                                                            | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-396564 addons                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-396564 ssh cat                                                                       | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | /opt/local-path-provisioner/pvc-41b3db4e-7b14-4edb-9a67-ba393129c596_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-396564 addons                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-396564 ssh curl -s                                                                   | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-396564 addons                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:07 UTC | 05 Dec 24 19:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-396564 addons                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:07 UTC | 05 Dec 24 19:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-396564 ip                                                                            | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:09 UTC | 05 Dec 24 19:09 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:02:30
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:02:30.955385  538905 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:02:30.955633  538905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:02:30.955641  538905 out.go:358] Setting ErrFile to fd 2...
	I1205 19:02:30.955645  538905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:02:30.955806  538905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:02:30.956507  538905 out.go:352] Setting JSON to false
	I1205 19:02:30.957508  538905 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6297,"bootTime":1733419054,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:02:30.957618  538905 start.go:139] virtualization: kvm guest
	I1205 19:02:30.959863  538905 out.go:177] * [addons-396564] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:02:30.961345  538905 notify.go:220] Checking for updates...
	I1205 19:02:30.961366  538905 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:02:30.962956  538905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:02:30.964562  538905 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:02:30.966034  538905 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:02:30.967498  538905 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:02:30.968985  538905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:02:30.970711  538905 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:02:31.003531  538905 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 19:02:31.005105  538905 start.go:297] selected driver: kvm2
	I1205 19:02:31.005127  538905 start.go:901] validating driver "kvm2" against <nil>
	I1205 19:02:31.005146  538905 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:02:31.005915  538905 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:02:31.006031  538905 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:02:31.022290  538905 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:02:31.022354  538905 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:02:31.022614  538905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:02:31.022646  538905 cni.go:84] Creating CNI manager for ""
	I1205 19:02:31.022689  538905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:02:31.022699  538905 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 19:02:31.022757  538905 start.go:340] cluster config:
	{Name:addons-396564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-396564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:02:31.022880  538905 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:02:31.024711  538905 out.go:177] * Starting "addons-396564" primary control-plane node in "addons-396564" cluster
	I1205 19:02:31.026081  538905 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:02:31.026118  538905 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:02:31.026130  538905 cache.go:56] Caching tarball of preloaded images
	I1205 19:02:31.026215  538905 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:02:31.026225  538905 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:02:31.026655  538905 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/config.json ...
	I1205 19:02:31.026695  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/config.json: {Name:mk077ee5da67ce1e15bac4e6e2cfc85d4920c391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:02:31.026871  538905 start.go:360] acquireMachinesLock for addons-396564: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:02:31.026936  538905 start.go:364] duration metric: took 47.419µs to acquireMachinesLock for "addons-396564"
	I1205 19:02:31.026959  538905 start.go:93] Provisioning new machine with config: &{Name:addons-396564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-396564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:02:31.027053  538905 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 19:02:31.028890  538905 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1205 19:02:31.029049  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:02:31.029092  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:02:31.044420  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45495
	I1205 19:02:31.044946  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:02:31.045522  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:02:31.045547  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:02:31.045971  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:02:31.046148  538905 main.go:141] libmachine: (addons-396564) Calling .GetMachineName
	I1205 19:02:31.046326  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:31.046512  538905 start.go:159] libmachine.API.Create for "addons-396564" (driver="kvm2")
	I1205 19:02:31.046553  538905 client.go:168] LocalClient.Create starting
	I1205 19:02:31.046599  538905 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:02:31.280827  538905 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:02:31.354195  538905 main.go:141] libmachine: Running pre-create checks...
	I1205 19:02:31.354222  538905 main.go:141] libmachine: (addons-396564) Calling .PreCreateCheck
	I1205 19:02:31.354845  538905 main.go:141] libmachine: (addons-396564) Calling .GetConfigRaw
	I1205 19:02:31.355996  538905 main.go:141] libmachine: Creating machine...
	I1205 19:02:31.356037  538905 main.go:141] libmachine: (addons-396564) Calling .Create
	I1205 19:02:31.356960  538905 main.go:141] libmachine: (addons-396564) Creating KVM machine...
	I1205 19:02:31.358193  538905 main.go:141] libmachine: (addons-396564) DBG | found existing default KVM network
	I1205 19:02:31.359163  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:31.358996  538927 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a50}
	I1205 19:02:31.359332  538905 main.go:141] libmachine: (addons-396564) DBG | created network xml: 
	I1205 19:02:31.359353  538905 main.go:141] libmachine: (addons-396564) DBG | <network>
	I1205 19:02:31.359364  538905 main.go:141] libmachine: (addons-396564) DBG |   <name>mk-addons-396564</name>
	I1205 19:02:31.359371  538905 main.go:141] libmachine: (addons-396564) DBG |   <dns enable='no'/>
	I1205 19:02:31.359383  538905 main.go:141] libmachine: (addons-396564) DBG |   
	I1205 19:02:31.359392  538905 main.go:141] libmachine: (addons-396564) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 19:02:31.359401  538905 main.go:141] libmachine: (addons-396564) DBG |     <dhcp>
	I1205 19:02:31.359409  538905 main.go:141] libmachine: (addons-396564) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 19:02:31.359414  538905 main.go:141] libmachine: (addons-396564) DBG |     </dhcp>
	I1205 19:02:31.359421  538905 main.go:141] libmachine: (addons-396564) DBG |   </ip>
	I1205 19:02:31.359426  538905 main.go:141] libmachine: (addons-396564) DBG |   
	I1205 19:02:31.359433  538905 main.go:141] libmachine: (addons-396564) DBG | </network>
	I1205 19:02:31.359496  538905 main.go:141] libmachine: (addons-396564) DBG | 
	I1205 19:02:31.365746  538905 main.go:141] libmachine: (addons-396564) DBG | trying to create private KVM network mk-addons-396564 192.168.39.0/24...
	I1205 19:02:31.431835  538905 main.go:141] libmachine: (addons-396564) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564 ...
	I1205 19:02:31.431867  538905 main.go:141] libmachine: (addons-396564) DBG | private KVM network mk-addons-396564 192.168.39.0/24 created
	I1205 19:02:31.431883  538905 main.go:141] libmachine: (addons-396564) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:02:31.431908  538905 main.go:141] libmachine: (addons-396564) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:02:31.431965  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:31.431759  538927 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:02:31.729814  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:31.729662  538927 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa...
	I1205 19:02:31.803910  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:31.803745  538927 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/addons-396564.rawdisk...
	I1205 19:02:31.803943  538905 main.go:141] libmachine: (addons-396564) DBG | Writing magic tar header
	I1205 19:02:31.803958  538905 main.go:141] libmachine: (addons-396564) DBG | Writing SSH key tar header
	I1205 19:02:31.803977  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:31.803883  538927 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564 ...
	I1205 19:02:31.803990  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564
	I1205 19:02:31.804069  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564 (perms=drwx------)
	I1205 19:02:31.804097  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:02:31.804106  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:02:31.804118  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:02:31.804124  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:02:31.804131  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:02:31.804137  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:02:31.804148  538905 main.go:141] libmachine: (addons-396564) Creating domain...
	I1205 19:02:31.804161  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:02:31.804170  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:02:31.804183  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:02:31.804202  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:02:31.804211  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home
	I1205 19:02:31.804219  538905 main.go:141] libmachine: (addons-396564) DBG | Skipping /home - not owner
	I1205 19:02:31.805271  538905 main.go:141] libmachine: (addons-396564) define libvirt domain using xml: 
	I1205 19:02:31.805294  538905 main.go:141] libmachine: (addons-396564) <domain type='kvm'>
	I1205 19:02:31.805304  538905 main.go:141] libmachine: (addons-396564)   <name>addons-396564</name>
	I1205 19:02:31.805312  538905 main.go:141] libmachine: (addons-396564)   <memory unit='MiB'>4000</memory>
	I1205 19:02:31.805333  538905 main.go:141] libmachine: (addons-396564)   <vcpu>2</vcpu>
	I1205 19:02:31.805343  538905 main.go:141] libmachine: (addons-396564)   <features>
	I1205 19:02:31.805352  538905 main.go:141] libmachine: (addons-396564)     <acpi/>
	I1205 19:02:31.805359  538905 main.go:141] libmachine: (addons-396564)     <apic/>
	I1205 19:02:31.805368  538905 main.go:141] libmachine: (addons-396564)     <pae/>
	I1205 19:02:31.805378  538905 main.go:141] libmachine: (addons-396564)     
	I1205 19:02:31.805386  538905 main.go:141] libmachine: (addons-396564)   </features>
	I1205 19:02:31.805395  538905 main.go:141] libmachine: (addons-396564)   <cpu mode='host-passthrough'>
	I1205 19:02:31.805401  538905 main.go:141] libmachine: (addons-396564)   
	I1205 19:02:31.805411  538905 main.go:141] libmachine: (addons-396564)   </cpu>
	I1205 19:02:31.805422  538905 main.go:141] libmachine: (addons-396564)   <os>
	I1205 19:02:31.805433  538905 main.go:141] libmachine: (addons-396564)     <type>hvm</type>
	I1205 19:02:31.805445  538905 main.go:141] libmachine: (addons-396564)     <boot dev='cdrom'/>
	I1205 19:02:31.805451  538905 main.go:141] libmachine: (addons-396564)     <boot dev='hd'/>
	I1205 19:02:31.805457  538905 main.go:141] libmachine: (addons-396564)     <bootmenu enable='no'/>
	I1205 19:02:31.805461  538905 main.go:141] libmachine: (addons-396564)   </os>
	I1205 19:02:31.805466  538905 main.go:141] libmachine: (addons-396564)   <devices>
	I1205 19:02:31.805472  538905 main.go:141] libmachine: (addons-396564)     <disk type='file' device='cdrom'>
	I1205 19:02:31.805482  538905 main.go:141] libmachine: (addons-396564)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/boot2docker.iso'/>
	I1205 19:02:31.805491  538905 main.go:141] libmachine: (addons-396564)       <target dev='hdc' bus='scsi'/>
	I1205 19:02:31.805496  538905 main.go:141] libmachine: (addons-396564)       <readonly/>
	I1205 19:02:31.805500  538905 main.go:141] libmachine: (addons-396564)     </disk>
	I1205 19:02:31.805537  538905 main.go:141] libmachine: (addons-396564)     <disk type='file' device='disk'>
	I1205 19:02:31.805565  538905 main.go:141] libmachine: (addons-396564)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:02:31.805587  538905 main.go:141] libmachine: (addons-396564)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/addons-396564.rawdisk'/>
	I1205 19:02:31.805599  538905 main.go:141] libmachine: (addons-396564)       <target dev='hda' bus='virtio'/>
	I1205 19:02:31.805608  538905 main.go:141] libmachine: (addons-396564)     </disk>
	I1205 19:02:31.805616  538905 main.go:141] libmachine: (addons-396564)     <interface type='network'>
	I1205 19:02:31.805627  538905 main.go:141] libmachine: (addons-396564)       <source network='mk-addons-396564'/>
	I1205 19:02:31.805639  538905 main.go:141] libmachine: (addons-396564)       <model type='virtio'/>
	I1205 19:02:31.805650  538905 main.go:141] libmachine: (addons-396564)     </interface>
	I1205 19:02:31.805661  538905 main.go:141] libmachine: (addons-396564)     <interface type='network'>
	I1205 19:02:31.805671  538905 main.go:141] libmachine: (addons-396564)       <source network='default'/>
	I1205 19:02:31.805685  538905 main.go:141] libmachine: (addons-396564)       <model type='virtio'/>
	I1205 19:02:31.805720  538905 main.go:141] libmachine: (addons-396564)     </interface>
	I1205 19:02:31.805750  538905 main.go:141] libmachine: (addons-396564)     <serial type='pty'>
	I1205 19:02:31.805766  538905 main.go:141] libmachine: (addons-396564)       <target port='0'/>
	I1205 19:02:31.805778  538905 main.go:141] libmachine: (addons-396564)     </serial>
	I1205 19:02:31.805793  538905 main.go:141] libmachine: (addons-396564)     <console type='pty'>
	I1205 19:02:31.805806  538905 main.go:141] libmachine: (addons-396564)       <target type='serial' port='0'/>
	I1205 19:02:31.805835  538905 main.go:141] libmachine: (addons-396564)     </console>
	I1205 19:02:31.805852  538905 main.go:141] libmachine: (addons-396564)     <rng model='virtio'>
	I1205 19:02:31.805863  538905 main.go:141] libmachine: (addons-396564)       <backend model='random'>/dev/random</backend>
	I1205 19:02:31.805883  538905 main.go:141] libmachine: (addons-396564)     </rng>
	I1205 19:02:31.805901  538905 main.go:141] libmachine: (addons-396564)     
	I1205 19:02:31.805920  538905 main.go:141] libmachine: (addons-396564)     
	I1205 19:02:31.805932  538905 main.go:141] libmachine: (addons-396564)   </devices>
	I1205 19:02:31.805939  538905 main.go:141] libmachine: (addons-396564) </domain>
	I1205 19:02:31.805956  538905 main.go:141] libmachine: (addons-396564) 
	I1205 19:02:31.813231  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:3e:46:6d in network default
	I1205 19:02:31.813848  538905 main.go:141] libmachine: (addons-396564) Ensuring networks are active...
	I1205 19:02:31.813871  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:31.814567  538905 main.go:141] libmachine: (addons-396564) Ensuring network default is active
	I1205 19:02:31.815030  538905 main.go:141] libmachine: (addons-396564) Ensuring network mk-addons-396564 is active
	I1205 19:02:31.816632  538905 main.go:141] libmachine: (addons-396564) Getting domain xml...
	I1205 19:02:31.817402  538905 main.go:141] libmachine: (addons-396564) Creating domain...
	I1205 19:02:33.253599  538905 main.go:141] libmachine: (addons-396564) Waiting to get IP...
	I1205 19:02:33.254373  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:33.254752  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:33.254789  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:33.254731  538927 retry.go:31] will retry after 280.930998ms: waiting for machine to come up
	I1205 19:02:33.537487  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:33.537910  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:33.537941  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:33.537868  538927 retry.go:31] will retry after 259.854298ms: waiting for machine to come up
	I1205 19:02:33.799485  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:33.799931  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:33.799959  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:33.799869  538927 retry.go:31] will retry after 398.375805ms: waiting for machine to come up
	I1205 19:02:34.199531  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:34.199933  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:34.199985  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:34.199918  538927 retry.go:31] will retry after 607.832689ms: waiting for machine to come up
	I1205 19:02:34.809790  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:34.810215  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:34.810239  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:34.810180  538927 retry.go:31] will retry after 562.585715ms: waiting for machine to come up
	I1205 19:02:35.374055  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:35.374564  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:35.374592  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:35.374507  538927 retry.go:31] will retry after 628.854692ms: waiting for machine to come up
	I1205 19:02:36.005446  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:36.005860  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:36.005893  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:36.005814  538927 retry.go:31] will retry after 1.039428653s: waiting for machine to come up
	I1205 19:02:37.046770  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:37.047259  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:37.047290  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:37.047215  538927 retry.go:31] will retry after 971.053342ms: waiting for machine to come up
	I1205 19:02:38.019641  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:38.020069  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:38.020093  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:38.020010  538927 retry.go:31] will retry after 1.410662317s: waiting for machine to come up
	I1205 19:02:39.432627  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:39.433098  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:39.433123  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:39.433042  538927 retry.go:31] will retry after 1.497979927s: waiting for machine to come up
	I1205 19:02:40.933032  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:40.933435  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:40.933481  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:40.933426  538927 retry.go:31] will retry after 2.733921879s: waiting for machine to come up
	I1205 19:02:43.669442  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:43.669835  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:43.669869  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:43.669776  538927 retry.go:31] will retry after 3.113935772s: waiting for machine to come up
	I1205 19:02:46.785658  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:46.786068  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:46.786112  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:46.785992  538927 retry.go:31] will retry after 3.769972558s: waiting for machine to come up
	I1205 19:02:50.559967  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:50.560354  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:50.560379  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:50.560306  538927 retry.go:31] will retry after 3.65413274s: waiting for machine to come up
	I1205 19:02:54.217489  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.217869  538905 main.go:141] libmachine: (addons-396564) Found IP for machine: 192.168.39.9
	I1205 19:02:54.217902  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has current primary IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.217911  538905 main.go:141] libmachine: (addons-396564) Reserving static IP address...
	I1205 19:02:54.218238  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find host DHCP lease matching {name: "addons-396564", mac: "52:54:00:86:dd:b4", ip: "192.168.39.9"} in network mk-addons-396564
	I1205 19:02:54.293917  538905 main.go:141] libmachine: (addons-396564) Reserved static IP address: 192.168.39.9
	I1205 19:02:54.293953  538905 main.go:141] libmachine: (addons-396564) DBG | Getting to WaitForSSH function...
	I1205 19:02:54.293961  538905 main.go:141] libmachine: (addons-396564) Waiting for SSH to be available...
	I1205 19:02:54.296405  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.296797  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:minikube Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.296834  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.297027  538905 main.go:141] libmachine: (addons-396564) DBG | Using SSH client type: external
	I1205 19:02:54.297051  538905 main.go:141] libmachine: (addons-396564) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa (-rw-------)
	I1205 19:02:54.297091  538905 main.go:141] libmachine: (addons-396564) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:02:54.297110  538905 main.go:141] libmachine: (addons-396564) DBG | About to run SSH command:
	I1205 19:02:54.297142  538905 main.go:141] libmachine: (addons-396564) DBG | exit 0
	I1205 19:02:54.428905  538905 main.go:141] libmachine: (addons-396564) DBG | SSH cmd err, output: <nil>: 
	I1205 19:02:54.429133  538905 main.go:141] libmachine: (addons-396564) KVM machine creation complete!
	I1205 19:02:54.429457  538905 main.go:141] libmachine: (addons-396564) Calling .GetConfigRaw
	I1205 19:02:54.430070  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:54.430276  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:54.430554  538905 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:02:54.430578  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:02:54.432004  538905 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:02:54.432024  538905 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:02:54.432031  538905 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:02:54.432037  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:54.434508  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.435033  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.435058  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.435295  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:54.435508  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.435790  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.435987  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:54.436181  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:54.436496  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:54.436513  538905 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:02:54.543902  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
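
The probe above is the driver's SSH reachability check: it retries a no-op "exit 0" over SSH until the guest answers. A rough manual equivalent, using the same client options, key path, and IP that appear earlier in this log (all specific to this run), might look like:

	# sketch of a manual reachability probe with the options shown above
	ssh -o StrictHostKeyChecking=no \
	    -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa \
	    docker@192.168.39.9 'exit 0' && echo "guest SSH is up"
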
	I1205 19:02:54.543938  538905 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:02:54.543946  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:54.546761  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.547167  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.547205  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.547392  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:54.547604  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.547804  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.547927  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:54.548074  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:54.548262  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:54.548302  538905 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:02:54.662064  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:02:54.662182  538905 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:02:54.662195  538905 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:02:54.662204  538905 main.go:141] libmachine: (addons-396564) Calling .GetMachineName
	I1205 19:02:54.662497  538905 buildroot.go:166] provisioning hostname "addons-396564"
	I1205 19:02:54.662550  538905 main.go:141] libmachine: (addons-396564) Calling .GetMachineName
	I1205 19:02:54.662771  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:54.665508  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.665898  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.665930  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.666130  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:54.666322  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.666519  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.666697  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:54.666861  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:54.667060  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:54.667074  538905 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-396564 && echo "addons-396564" | sudo tee /etc/hostname
	I1205 19:02:54.794326  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-396564
	
	I1205 19:02:54.794384  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:54.797379  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.797716  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.797744  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.797932  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:54.798140  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.798305  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.798449  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:54.798732  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:54.798923  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:54.798940  538905 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-396564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-396564/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-396564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:02:54.918401  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
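
After the hostname script above runs, the guest should report the profile name both from the kernel and from /etc/hosts. A quick check run inside the guest (a sketch; expected values follow from the script itself) would be:

	hostname                        # expect: addons-396564
	grep addons-396564 /etc/hosts   # expect the 127.0.1.1 mapping added above
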
	I1205 19:02:54.918433  538905 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:02:54.918459  538905 buildroot.go:174] setting up certificates
	I1205 19:02:54.918475  538905 provision.go:84] configureAuth start
	I1205 19:02:54.918484  538905 main.go:141] libmachine: (addons-396564) Calling .GetMachineName
	I1205 19:02:54.918771  538905 main.go:141] libmachine: (addons-396564) Calling .GetIP
	I1205 19:02:54.921280  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.921639  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.921668  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.921844  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:54.923686  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.924008  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.924040  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.924151  538905 provision.go:143] copyHostCerts
	I1205 19:02:54.924219  538905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:02:54.924377  538905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:02:54.924443  538905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:02:54.924492  538905 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.addons-396564 san=[127.0.0.1 192.168.39.9 addons-396564 localhost minikube]
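
The server certificate above is generated in Go by minikube itself; the following openssl commands are only a sketch of producing an equivalent cert with the same organization and SANs. The CA paths are the ones named in the log; the output file names are illustrative.

	# hypothetical openssl equivalent of the server cert generated above
	CA=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	CAKEY=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	    -out server.csr -subj "/O=jenkins.addons-396564"
	openssl x509 -req -in server.csr -CA "$CA" -CAkey "$CAKEY" -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.9,DNS:addons-396564,DNS:localhost,DNS:minikube")
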
	I1205 19:02:55.073548  538905 provision.go:177] copyRemoteCerts
	I1205 19:02:55.073614  538905 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:02:55.073642  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.077543  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.078029  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.078053  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.078328  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.078560  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.078799  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.079011  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:02:55.163532  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:02:55.188145  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:02:55.212749  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:02:55.237949  538905 provision.go:87] duration metric: took 319.45828ms to configureAuth
	I1205 19:02:55.237984  538905 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:02:55.238194  538905 config.go:182] Loaded profile config "addons-396564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:02:55.238286  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.241223  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.241551  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.241577  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.241750  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.241974  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.242157  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.242359  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.242562  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:55.242743  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:55.242757  538905 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:02:55.500862  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:02:55.500895  538905 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:02:55.500905  538905 main.go:141] libmachine: (addons-396564) Calling .GetURL
	I1205 19:02:55.502418  538905 main.go:141] libmachine: (addons-396564) DBG | Using libvirt version 6000000
	I1205 19:02:55.504941  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.505303  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.505334  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.505479  538905 main.go:141] libmachine: Docker is up and running!
	I1205 19:02:55.505494  538905 main.go:141] libmachine: Reticulating splines...
	I1205 19:02:55.505503  538905 client.go:171] duration metric: took 24.458941374s to LocalClient.Create
	I1205 19:02:55.505536  538905 start.go:167] duration metric: took 24.459024763s to libmachine.API.Create "addons-396564"
	I1205 19:02:55.505551  538905 start.go:293] postStartSetup for "addons-396564" (driver="kvm2")
	I1205 19:02:55.505567  538905 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:02:55.505593  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:55.505888  538905 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:02:55.505917  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.508001  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.508342  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.508371  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.508538  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.508702  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.508853  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.508981  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:02:55.597167  538905 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:02:55.601604  538905 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:02:55.601634  538905 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:02:55.601725  538905 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:02:55.601759  538905 start.go:296] duration metric: took 96.19822ms for postStartSetup
	I1205 19:02:55.601809  538905 main.go:141] libmachine: (addons-396564) Calling .GetConfigRaw
	I1205 19:02:55.602468  538905 main.go:141] libmachine: (addons-396564) Calling .GetIP
	I1205 19:02:55.605049  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.605335  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.605366  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.605585  538905 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/config.json ...
	I1205 19:02:55.605819  538905 start.go:128] duration metric: took 24.578752053s to createHost
	I1205 19:02:55.605850  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.607839  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.608221  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.608251  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.608409  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.608602  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.608758  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.608918  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.609065  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:55.609251  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:55.609264  538905 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:02:55.717362  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733425375.688966764
	
	I1205 19:02:55.717393  538905 fix.go:216] guest clock: 1733425375.688966764
	I1205 19:02:55.717401  538905 fix.go:229] Guest: 2024-12-05 19:02:55.688966764 +0000 UTC Remote: 2024-12-05 19:02:55.605834524 +0000 UTC m=+24.690421001 (delta=83.13224ms)
	I1205 19:02:55.717423  538905 fix.go:200] guest clock delta is within tolerance: 83.13224ms
	I1205 19:02:55.717429  538905 start.go:83] releasing machines lock for "addons-396564", held for 24.690480333s
	I1205 19:02:55.717451  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:55.717719  538905 main.go:141] libmachine: (addons-396564) Calling .GetIP
	I1205 19:02:55.720452  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.720802  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.720835  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.720954  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:55.721533  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:55.721724  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:55.721838  538905 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:02:55.721909  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.721941  538905 ssh_runner.go:195] Run: cat /version.json
	I1205 19:02:55.721967  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.724709  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.724870  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.725081  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.725107  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.725288  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.725405  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.725436  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.725482  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.725670  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.725678  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.725861  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.725855  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:02:55.725978  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.726138  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:02:55.828445  538905 ssh_runner.go:195] Run: systemctl --version
	I1205 19:02:55.834558  538905 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:02:55.997650  538905 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:02:56.004916  538905 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:02:56.005041  538905 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:02:56.022101  538905 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:02:56.022136  538905 start.go:495] detecting cgroup driver to use...
	I1205 19:02:56.022227  538905 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:02:56.038382  538905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:02:56.053166  538905 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:02:56.053236  538905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:02:56.067658  538905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:02:56.082380  538905 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:02:56.203743  538905 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:02:56.359486  538905 docker.go:233] disabling docker service ...
	I1205 19:02:56.359581  538905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:02:56.374940  538905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:02:56.388245  538905 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:02:56.528365  538905 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:02:56.652910  538905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:02:56.668303  538905 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:02:56.687811  538905 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:02:56.687876  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.699758  538905 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:02:56.699828  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.710994  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.721827  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.732840  538905 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:02:56.744109  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.755349  538905 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.775027  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
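
The sed edits above all land in the same CRI-O drop-in file; a quick way to eyeball the result on the guest (a sketch) is:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
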
	I1205 19:02:56.786975  538905 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:02:56.796769  538905 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:02:56.796862  538905 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:02:56.810530  538905 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
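
The sysctl failure above is expected before br_netfilter is loaded; the bridge netfilter keys only appear under /proc/sys once the modprobe succeeds. A sketch of the same check done by hand on the guest:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above
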
	I1205 19:02:56.820415  538905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:02:56.939260  538905 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:02:57.030837  538905 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:02:57.030939  538905 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:02:57.036149  538905 start.go:563] Will wait 60s for crictl version
	I1205 19:02:57.036240  538905 ssh_runner.go:195] Run: which crictl
	I1205 19:02:57.040118  538905 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:02:57.083305  538905 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
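
With /etc/crictl.yaml pointing at the CRI-O socket (written a few lines above), the runtime can also be queried directly; the explicit endpoint flag shown here is optional once that file exists. A sketch:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl info | head    # runtime status and config summary
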
	I1205 19:02:57.083428  538905 ssh_runner.go:195] Run: crio --version
	I1205 19:02:57.111637  538905 ssh_runner.go:195] Run: crio --version
	I1205 19:02:57.142930  538905 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:02:57.144340  538905 main.go:141] libmachine: (addons-396564) Calling .GetIP
	I1205 19:02:57.146939  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:57.147349  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:57.147438  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:57.147611  538905 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:02:57.152052  538905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:02:57.165788  538905 kubeadm.go:883] updating cluster {Name:addons-396564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-396564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:02:57.165921  538905 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:02:57.165990  538905 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:02:57.201069  538905 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 19:02:57.201161  538905 ssh_runner.go:195] Run: which lz4
	I1205 19:02:57.205635  538905 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 19:02:57.209913  538905 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 19:02:57.209957  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 19:02:58.602414  538905 crio.go:462] duration metric: took 1.396808897s to copy over tarball
	I1205 19:02:58.602508  538905 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 19:03:00.818046  538905 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.215473042s)
	I1205 19:03:00.818088  538905 crio.go:469] duration metric: took 2.215639844s to extract the tarball
	I1205 19:03:00.818099  538905 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 19:03:00.858572  538905 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:03:00.902879  538905 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:03:00.902911  538905 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:03:00.902925  538905 kubeadm.go:934] updating node { 192.168.39.9 8443 v1.31.2 crio true true} ...
	I1205 19:03:00.903084  538905 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-396564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-396564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:03:00.903176  538905 ssh_runner.go:195] Run: crio config
	I1205 19:03:00.951344  538905 cni.go:84] Creating CNI manager for ""
	I1205 19:03:00.951372  538905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:03:00.951384  538905 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:03:00.951406  538905 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.9 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-396564 NodeName:addons-396564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:03:00.951548  538905 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-396564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.9"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.9"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
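
A config like the one above can be exercised without actually bringing the control plane up by using kubeadm's dry-run mode; a sketch using the binary and config paths this run writes further down in the log:

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run
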
	I1205 19:03:00.951615  538905 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:03:00.963052  538905 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:03:00.963138  538905 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:03:00.972888  538905 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1205 19:03:00.989532  538905 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:03:01.006408  538905 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1205 19:03:01.024244  538905 ssh_runner.go:195] Run: grep 192.168.39.9	control-plane.minikube.internal$ /etc/hosts
	I1205 19:03:01.028317  538905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:03:01.041736  538905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:03:01.174098  538905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:03:01.193014  538905 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564 for IP: 192.168.39.9
	I1205 19:03:01.193052  538905 certs.go:194] generating shared ca certs ...
	I1205 19:03:01.193080  538905 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.193289  538905 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:03:01.364949  538905 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt ...
	I1205 19:03:01.364986  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt: {Name:mkb0906d0eefc726a3bca7b5f1107c861696fa8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.365196  538905 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key ...
	I1205 19:03:01.365211  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key: {Name:mke5b97d4ab29c4390ef0b2f6566024d0db0ba91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.365318  538905 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:03:01.441933  538905 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt ...
	I1205 19:03:01.441972  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt: {Name:mk070fdb3f8a5db8d4547993257f562b7c79c1eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.442289  538905 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key ...
	I1205 19:03:01.442318  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key: {Name:mk8e32bf5e6761b3c50f4c9ba28815b32a22d987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.442450  538905 certs.go:256] generating profile certs ...
	I1205 19:03:01.442517  538905 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.key
	I1205 19:03:01.442531  538905 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt with IP's: []
	I1205 19:03:01.651902  538905 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt ...
	I1205 19:03:01.651935  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: {Name:mk74617608404eaed6e3664672f5e26e12276e2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.652140  538905 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.key ...
	I1205 19:03:01.652158  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.key: {Name:mk51ac90223272f0a3070964a273b469b652346b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.652259  538905 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key.41add270
	I1205 19:03:01.652301  538905 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt.41add270 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.9]
	I1205 19:03:01.803934  538905 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt.41add270 ...
	I1205 19:03:01.803973  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt.41add270: {Name:mk6f589e7c8dc32d5df66e511d67e9243b1d03b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.804160  538905 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key.41add270 ...
	I1205 19:03:01.804176  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key.41add270: {Name:mk9b1a71ff621c1f4832b4f504830ce477d5bf61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.804252  538905 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt.41add270 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt
	I1205 19:03:01.804409  538905 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key.41add270 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key
	I1205 19:03:01.804472  538905 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.key
	I1205 19:03:01.804493  538905 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.crt with IP's: []
	I1205 19:03:02.093089  538905 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.crt ...
	I1205 19:03:02.093129  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.crt: {Name:mk155e517c3bafdd635249d9a1d9c2ae1f557583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:02.093348  538905 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.key ...
	I1205 19:03:02.093366  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.key: {Name:mk1fca4c3033a8c71405b1d07ddd033cb4264799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:02.093604  538905 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:03:02.093651  538905 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:03:02.093688  538905 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:03:02.093719  538905 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:03:02.094398  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:03:02.122997  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:03:02.150079  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:03:02.177181  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:03:02.203403  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 19:03:02.229119  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:03:02.255473  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:03:02.281879  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 19:03:02.307258  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:03:02.331505  538905 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:03:02.348854  538905 ssh_runner.go:195] Run: openssl version
	I1205 19:03:02.355122  538905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:03:02.366546  538905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:02.371241  538905 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:02.371318  538905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:02.377265  538905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
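
The two commands above implement OpenSSL's hashed-directory convention: the CA is linked under /etc/ssl/certs by its subject hash so tools using -CApath can find it. Spelled out by hand (a sketch of the same steps):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem
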
	I1205 19:03:02.388485  538905 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:03:02.392800  538905 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:03:02.392857  538905 kubeadm.go:392] StartCluster: {Name:addons-396564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-396564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:03:02.392937  538905 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:03:02.392981  538905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:03:02.434718  538905 cri.go:89] found id: ""
	I1205 19:03:02.434817  538905 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:03:02.445418  538905 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:03:02.455988  538905 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:03:02.466887  538905 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:03:02.466914  538905 kubeadm.go:157] found existing configuration files:
	
	I1205 19:03:02.466974  538905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 19:03:02.476641  538905 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 19:03:02.476712  538905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 19:03:02.487114  538905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 19:03:02.497174  538905 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 19:03:02.497265  538905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 19:03:02.507628  538905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 19:03:02.517718  538905 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 19:03:02.517777  538905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 19:03:02.528386  538905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 19:03:02.538820  538905 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 19:03:02.538963  538905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
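The grep-then-rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; a missing file or a mismatched endpoint leads to removal so kubeadm can regenerate it. A condensed sketch of that pattern (file list and endpoint taken from the log):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it targets the expected API server endpoint;
      # a missing file or a different server URL both trigger removal.
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done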
	I1205 19:03:02.549889  538905 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 19:03:02.748637  538905 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:03:12.860146  538905 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 19:03:12.860236  538905 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 19:03:12.860351  538905 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:03:12.860515  538905 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:03:12.860620  538905 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 19:03:12.860684  538905 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:03:12.862291  538905 out.go:235]   - Generating certificates and keys ...
	I1205 19:03:12.862388  538905 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 19:03:12.862462  538905 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 19:03:12.862563  538905 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:03:12.862642  538905 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:03:12.862712  538905 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:03:12.862757  538905 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 19:03:12.862807  538905 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 19:03:12.862912  538905 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-396564 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I1205 19:03:12.862963  538905 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 19:03:12.863075  538905 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-396564 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I1205 19:03:12.863178  538905 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:03:12.863291  538905 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:03:12.863357  538905 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 19:03:12.863440  538905 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:03:12.863526  538905 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:03:12.863609  538905 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 19:03:12.863691  538905 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:03:12.863787  538905 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:03:12.863869  538905 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:03:12.863980  538905 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:03:12.864072  538905 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:03:12.865604  538905 out.go:235]   - Booting up control plane ...
	I1205 19:03:12.865701  538905 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:03:12.865773  538905 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:03:12.865834  538905 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:03:12.865929  538905 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:03:12.866009  538905 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:03:12.866044  538905 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 19:03:12.866150  538905 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 19:03:12.866296  538905 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 19:03:12.866359  538905 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.396416ms
	I1205 19:03:12.866424  538905 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 19:03:12.866473  538905 kubeadm.go:310] [api-check] The API server is healthy after 5.001698189s
	I1205 19:03:12.866593  538905 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:03:12.866714  538905 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:03:12.866784  538905 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:03:12.867017  538905 kubeadm.go:310] [mark-control-plane] Marking the node addons-396564 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:03:12.867084  538905 kubeadm.go:310] [bootstrap-token] Using token: xx61i1.j99ndvasf8gy30az
	I1205 19:03:12.869309  538905 out.go:235]   - Configuring RBAC rules ...
	I1205 19:03:12.869421  538905 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:03:12.869519  538905 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:03:12.869729  538905 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:03:12.869892  538905 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:03:12.870045  538905 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:03:12.870117  538905 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:03:12.870231  538905 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:03:12.870277  538905 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 19:03:12.870323  538905 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 19:03:12.870329  538905 kubeadm.go:310] 
	I1205 19:03:12.870382  538905 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 19:03:12.870388  538905 kubeadm.go:310] 
	I1205 19:03:12.870482  538905 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 19:03:12.870491  538905 kubeadm.go:310] 
	I1205 19:03:12.870521  538905 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 19:03:12.870590  538905 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:03:12.870666  538905 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:03:12.870686  538905 kubeadm.go:310] 
	I1205 19:03:12.870762  538905 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 19:03:12.870768  538905 kubeadm.go:310] 
	I1205 19:03:12.870811  538905 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:03:12.870820  538905 kubeadm.go:310] 
	I1205 19:03:12.870863  538905 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 19:03:12.870943  538905 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:03:12.871007  538905 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:03:12.871013  538905 kubeadm.go:310] 
	I1205 19:03:12.871091  538905 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:03:12.871164  538905 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 19:03:12.871170  538905 kubeadm.go:310] 
	I1205 19:03:12.871242  538905 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xx61i1.j99ndvasf8gy30az \
	I1205 19:03:12.871336  538905 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 19:03:12.871356  538905 kubeadm.go:310] 	--control-plane 
	I1205 19:03:12.871360  538905 kubeadm.go:310] 
	I1205 19:03:12.871440  538905 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:03:12.871450  538905 kubeadm.go:310] 
	I1205 19:03:12.871523  538905 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xx61i1.j99ndvasf8gy30az \
	I1205 19:03:12.871629  538905 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
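The join command embeds a bootstrap token and the SHA-256 hash of the cluster CA public key. Bootstrap tokens expire (24h by default), so if the printed command is lost it can be regenerated on the control-plane node; the commands below are the standard kubeadm/openssl recipe, shown as a sketch and using the certificateDir logged above rather than anything taken from this run:

    # Print a fresh join command with a newly minted bootstrap token.
    sudo kubeadm token create --print-join-command

    # Recompute the --discovery-token-ca-cert-hash from the cluster CA
    # (certificateDir is /var/lib/minikube/certs in this log).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'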
	I1205 19:03:12.871641  538905 cni.go:84] Creating CNI manager for ""
	I1205 19:03:12.871647  538905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:03:12.873168  538905 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 19:03:12.874496  538905 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 19:03:12.888945  538905 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
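The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced at "Configuring bridge CNI" above. Its exact contents are not shown in this log; the snippet below writes an illustrative minimal bridge conflist of the kind CRI-O picks up from that directory (the subnet is an assumption, not read from this run), not the file minikube actually ships:

    # Write an illustrative bridge CNI config -- NOT the exact file minikube uses.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF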
	I1205 19:03:12.913413  538905 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:03:12.913498  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:12.913498  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-396564 minikube.k8s.io/updated_at=2024_12_05T19_03_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=addons-396564 minikube.k8s.io/primary=true
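The clusterrolebinding and node-label commands above grant kube-system's default ServiceAccount cluster-admin (the minikube-rbac binding) and stamp the node with minikube.k8s.io metadata. A quick way to confirm both afterwards, sketched with the names from this log:

    # Inspect the RBAC binding created for kube-system:default.
    kubectl describe clusterrolebinding minikube-rbac

    # Show the minikube.k8s.io/* labels stamped onto the node.
    kubectl get node addons-396564 --show-labels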
	I1205 19:03:13.038908  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:13.080346  538905 ops.go:34] apiserver oom_adj: -16
	I1205 19:03:13.539104  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:14.039871  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:14.539935  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:15.039037  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:15.539087  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:16.039563  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:16.539270  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:17.039924  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:17.137675  538905 kubeadm.go:1113] duration metric: took 4.224246396s to wait for elevateKubeSystemPrivileges
	I1205 19:03:17.137722  538905 kubeadm.go:394] duration metric: took 14.744870852s to StartCluster
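The repeated kubectl get sa default calls above are the elevateKubeSystemPrivileges wait: the RBAC binding is created immediately, then minikube polls roughly every 500ms until the controller-manager has created the default ServiceAccount (about 4.2s in this run). A sketch of the same wait, assuming kubectl on the host already points at the new cluster:

    # Retry until the controller-manager has created the "default" ServiceAccount.
    until kubectl get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
    echo "default ServiceAccount is present"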
	I1205 19:03:17.137748  538905 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:17.137923  538905 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:03:17.138342  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:17.138591  538905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:03:17.138609  538905 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:03:17.138682  538905 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
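The toEnable map above drives the rest of this log: every addon set to true (ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, volumesnapshots, storage-provisioner, yakd, ...) gets installed next. The same per-profile set can be inspected or toggled from the CLI; a sketch using the profile name from this run (the exact flags passed by the test harness are not shown here):

    # Show which addons are enabled for this profile.
    minikube -p addons-396564 addons list

    # Enable or disable an individual addon after the cluster is up.
    minikube -p addons-396564 addons enable metrics-server
    minikube -p addons-396564 addons disable volcano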
	I1205 19:03:17.138828  538905 addons.go:69] Setting yakd=true in profile "addons-396564"
	I1205 19:03:17.138842  538905 addons.go:69] Setting inspektor-gadget=true in profile "addons-396564"
	I1205 19:03:17.138863  538905 addons.go:69] Setting volumesnapshots=true in profile "addons-396564"
	I1205 19:03:17.138870  538905 addons.go:234] Setting addon inspektor-gadget=true in "addons-396564"
	I1205 19:03:17.138878  538905 addons.go:234] Setting addon volumesnapshots=true in "addons-396564"
	I1205 19:03:17.138878  538905 addons.go:69] Setting metrics-server=true in profile "addons-396564"
	I1205 19:03:17.138879  538905 addons.go:69] Setting volcano=true in profile "addons-396564"
	I1205 19:03:17.138900  538905 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-396564"
	I1205 19:03:17.138909  538905 addons.go:234] Setting addon volcano=true in "addons-396564"
	I1205 19:03:17.138905  538905 config.go:182] Loaded profile config "addons-396564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:03:17.138918  538905 addons.go:69] Setting registry=true in profile "addons-396564"
	I1205 19:03:17.138919  538905 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-396564"
	I1205 19:03:17.138923  538905 addons.go:69] Setting gcp-auth=true in profile "addons-396564"
	I1205 19:03:17.138929  538905 addons.go:234] Setting addon registry=true in "addons-396564"
	I1205 19:03:17.138931  538905 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-396564"
	I1205 19:03:17.138937  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.138941  538905 mustload.go:65] Loading cluster: addons-396564
	I1205 19:03:17.138949  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.138860  538905 addons.go:69] Setting storage-provisioner=true in profile "addons-396564"
	I1205 19:03:17.138955  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.138965  538905 addons.go:69] Setting ingress=true in profile "addons-396564"
	I1205 19:03:17.138968  538905 addons.go:234] Setting addon storage-provisioner=true in "addons-396564"
	I1205 19:03:17.138977  538905 addons.go:234] Setting addon ingress=true in "addons-396564"
	I1205 19:03:17.138991  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139015  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.138855  538905 addons.go:234] Setting addon yakd=true in "addons-396564"
	I1205 19:03:17.139115  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139122  538905 config.go:182] Loaded profile config "addons-396564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:03:17.138910  538905 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-396564"
	I1205 19:03:17.139280  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139402  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.138892  538905 addons.go:234] Setting addon metrics-server=true in "addons-396564"
	I1205 19:03:17.139438  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139438  538905 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-396564"
	I1205 19:03:17.139440  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139449  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139472  538905 addons.go:69] Setting default-storageclass=true in profile "addons-396564"
	I1205 19:03:17.139477  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139490  538905 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-396564"
	I1205 19:03:17.139491  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139496  538905 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-396564"
	I1205 19:03:17.138910  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139417  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139516  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139526  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139536  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139546  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139568  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139580  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139612  538905 addons.go:69] Setting cloud-spanner=true in profile "addons-396564"
	I1205 19:03:17.139623  538905 addons.go:234] Setting addon cloud-spanner=true in "addons-396564"
	I1205 19:03:17.138912  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139631  538905 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-396564"
	I1205 19:03:17.139646  538905 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-396564"
	I1205 19:03:17.138944  538905 addons.go:69] Setting ingress-dns=true in profile "addons-396564"
	I1205 19:03:17.139734  538905 addons.go:234] Setting addon ingress-dns=true in "addons-396564"
	I1205 19:03:17.139797  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139834  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139916  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139946  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139964  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139969  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139999  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.140005  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.140007  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139921  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.140152  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139933  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139925  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.140441  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.140635  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.140668  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139918  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.141601  538905 out.go:177] * Verifying Kubernetes components...
	I1205 19:03:17.143429  538905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
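"Verifying Kubernetes components" corresponds to the VerifyComponents map in the cluster config above (apiserver, apps_running, default_sa, kubelet, node_ready, system_pods). A rough manual equivalent of those checks, shown as a sketch rather than what the verifier literally runs:

    # Node registered and Ready.
    kubectl get nodes -o wide

    # Control-plane and system pods running in kube-system.
    kubectl get pods -n kube-system

    # Kubelet healthy on the node itself.
    minikube ssh -p addons-396564 -- sudo systemctl is-active kubelet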
	I1205 19:03:17.152666  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.152731  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.153121  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.153163  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.158728  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.158794  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.162903  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I1205 19:03:17.167176  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45151
	I1205 19:03:17.167941  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.168683  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.168709  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.168795  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.169319  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.169950  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.170000  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.170747  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.170778  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.171266  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.171939  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.171983  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.174673  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36641
	I1205 19:03:17.175336  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.175996  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.176026  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.176464  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.177275  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I1205 19:03:17.177654  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.177880  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46871
	I1205 19:03:17.178165  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.178180  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.178609  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.179261  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.179320  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.179936  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.180864  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.180882  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.180932  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.181006  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.181641  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.182313  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.182357  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.190692  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I1205 19:03:17.200001  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.200850  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.200877  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.201419  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.201758  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I1205 19:03:17.202109  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.202171  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.202191  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I1205 19:03:17.202745  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.202884  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.203329  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.203351  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.203530  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.203546  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.204027  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.204637  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.204689  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.205513  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42447
	I1205 19:03:17.206109  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.206758  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.206776  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.207195  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.207794  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.207836  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.208032  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36735
	I1205 19:03:17.208519  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.208805  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36107
	I1205 19:03:17.209058  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.209073  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.209452  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.209924  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.209944  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.210325  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.210699  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.210758  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.214611  538905 addons.go:234] Setting addon default-storageclass=true in "addons-396564"
	I1205 19:03:17.214660  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.215034  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.215076  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.215360  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.215812  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.215851  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.216471  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.216504  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.218736  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I1205 19:03:17.219325  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.219964  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.219983  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.220414  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.220596  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.223748  538905 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-396564"
	I1205 19:03:17.223799  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.224163  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.224207  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.226251  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33919
	I1205 19:03:17.228216  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.228856  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.228886  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.229072  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I1205 19:03:17.229308  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.229456  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.229651  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.230237  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.230256  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.230558  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37935
	I1205 19:03:17.230755  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.231379  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.231425  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.231669  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.232250  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.232280  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.232690  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.232866  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.234739  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.236688  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33949
	I1205 19:03:17.237159  538905 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1205 19:03:17.237565  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.238368  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
	I1205 19:03:17.238605  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.238629  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.238745  538905 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 19:03:17.238762  538905 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 19:03:17.238794  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.240555  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I1205 19:03:17.240754  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.241171  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.241219  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.241747  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.242370  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.242390  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.242458  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I1205 19:03:17.242818  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.242900  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I1205 19:03:17.243226  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.243264  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.244048  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.244081  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.244398  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.244614  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.244807  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.244883  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I1205 19:03:17.245298  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.245422  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34543
	I1205 19:03:17.245866  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.246637  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.246656  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.247197  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.247697  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.248391  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.248629  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.250408  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I1205 19:03:17.250581  538905 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1205 19:03:17.250833  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.252639  538905 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 19:03:17.252965  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.253030  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.253384  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.253973  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.254000  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.254070  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43293
	I1205 19:03:17.254185  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.254357  538905 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 19:03:17.254380  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 19:03:17.254400  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.254472  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.254594  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.254731  538905 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:03:17.256978  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1205 19:03:17.257067  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.257084  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.257179  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.257220  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.257236  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.257264  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.257307  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.257311  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I1205 19:03:17.257427  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.257438  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.257482  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43135
	I1205 19:03:17.258243  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.258337  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.258345  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.258358  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.258365  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.258414  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.258442  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.258606  538905 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:03:17.258622  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:03:17.258641  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.258652  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.258713  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.259026  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.259051  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.259028  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.259106  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.259187  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.259199  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.259475  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.259573  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.260156  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.260201  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.260295  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.260728  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.260848  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.260956  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.261268  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.261579  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.261596  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.262196  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.262217  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.262234  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.262549  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.263049  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.263081  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.263098  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.263129  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.263244  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.263293  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.263579  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.263640  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.263973  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.264019  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.264045  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:17.264337  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:17.264611  538905 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:03:17.264623  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.264650  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:17.264615  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:17.264671  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:17.264684  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:17.264691  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:17.265710  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.265958  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.266069  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 19:03:17.266211  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:17.266256  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.266276  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:17.266476  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	W1205 19:03:17.266559  538905 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 19:03:17.266990  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.267858  538905 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:03:17.268598  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.268729  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.268762  538905 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1205 19:03:17.268930  538905 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 19:03:17.269000  538905 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 19:03:17.269058  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 19:03:17.269747  538905 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 19:03:17.269776  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.269180  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.269424  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.269833  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.270201  538905 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:03:17.270219  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 19:03:17.270237  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.270336  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.270531  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.270703  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
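Each "installing /etc/kubernetes/addons/<file>" entry above is minikube staging an addon manifest onto the node over the SSH clients it has just opened, before applying it with the in-VM kubectl under /var/lib/minikube/binaries. A condensed sketch of that stage-then-apply pattern, with the apply invocation hedged because it is not shown in this part of the log (key path, user and IP taken from the ssh client lines above):

    # Stage a manifest onto the node, then apply it with the in-VM kubectl.
    scp -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa \
      ingress-deploy.yaml docker@192.168.39.9:/tmp/ingress-deploy.yaml
    minikube ssh -p addons-396564 -- \
      "sudo cp /tmp/ingress-deploy.yaml /etc/kubernetes/addons/ && \
       sudo /var/lib/minikube/binaries/v1.31.2/kubectl \
         --kubeconfig=/var/lib/minikube/kubeconfig \
         apply -f /etc/kubernetes/addons/ingress-deploy.yaml"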
	I1205 19:03:17.271214  538905 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 19:03:17.271233  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 19:03:17.271250  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.271564  538905 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 19:03:17.271728  538905 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 19:03:17.271776  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.272088  538905 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1205 19:03:17.274717  538905 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:03:17.274745  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 19:03:17.274766  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.276130  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.276178  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36215
	I1205 19:03:17.276710  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.277058  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.277089  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.277302  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.277536  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.277696  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.277883  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.278013  538905 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1205 19:03:17.278397  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.278419  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.278799  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.278990  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.279551  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.279727  538905 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 19:03:17.279751  538905 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1205 19:03:17.279782  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.282341  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.284441  538905 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1205 19:03:17.286245  538905 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1205 19:03:17.286274  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 19:03:17.286304  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.287847  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.288505  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.288719  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39173
	I1205 19:03:17.288976  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.288999  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.289063  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33959
	I1205 19:03:17.289175  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.289236  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I1205 19:03:17.289830  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.289857  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.289939  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.289954  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.289957  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.290026  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.290505  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.290549  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.290569  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.290598  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.290598  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.290503  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.290641  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.290716  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.291374  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.291432  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.291490  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.291498  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.291505  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.291513  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.291538  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.291556  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.291609  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.291624  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.291654  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.291672  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.291805  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.291961  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.291998  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.292034  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.292227  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.292238  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.292331  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.292335  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.292262  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.292508  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.292565  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.292601  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.292638  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.292743  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.292889  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.292943  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.293760  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.293758  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.294020  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.294433  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.294873  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.294892  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.295094  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.295342  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.295392  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.295436  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.295955  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.296010  538905 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:03:17.296026  538905 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:03:17.296053  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.296131  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.296689  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43991
	I1205 19:03:17.297173  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.297713  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.297732  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.297756  538905 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1205 19:03:17.298293  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.298463  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.299415  538905 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:03:17.299434  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 19:03:17.299452  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.299483  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.300707  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.300732  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.300995  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.301197  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.301398  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.301532  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.303183  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.303639  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.303661  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.303909  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.304115  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.304253  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.304394  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.309978  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
	I1205 19:03:17.310453  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.311077  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.311103  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.311447  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.311706  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.313508  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.315532  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 19:03:17.317242  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 19:03:17.318690  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 19:03:17.320299  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 19:03:17.320540  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
	I1205 19:03:17.320966  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.321573  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.321597  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.321996  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.322244  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.323204  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 19:03:17.324251  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.326052  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 19:03:17.326052  538905 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 19:03:17.327988  538905 out.go:177]   - Using image docker.io/busybox:stable
	I1205 19:03:17.328022  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 19:03:17.329516  538905 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:03:17.329542  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 19:03:17.329542  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 19:03:17.329574  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.330987  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 19:03:17.331050  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 19:03:17.331081  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.333003  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.333437  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.333509  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.333679  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.333863  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.334040  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.334198  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.335147  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.335551  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.335572  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.335852  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.336035  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.336191  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.336373  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.724756  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:03:17.779152  538905 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 19:03:17.779189  538905 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 19:03:17.792340  538905 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 19:03:17.792377  538905 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 19:03:17.830257  538905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:03:17.830281  538905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:03:17.837448  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:03:17.850371  538905 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 19:03:17.850409  538905 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 19:03:17.858190  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:03:17.875074  538905 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 19:03:17.875110  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1205 19:03:17.887596  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 19:03:17.913718  538905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 19:03:17.913744  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 19:03:17.930259  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 19:03:17.939407  538905 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 19:03:17.939437  538905 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 19:03:17.941437  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 19:03:17.941462  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 19:03:17.944453  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:03:17.972559  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:03:17.974862  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:03:18.051969  538905 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 19:03:18.052011  538905 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 19:03:18.083841  538905 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:03:18.083878  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 19:03:18.198115  538905 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 19:03:18.198153  538905 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 19:03:18.211074  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 19:03:18.211113  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 19:03:18.264034  538905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 19:03:18.264074  538905 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 19:03:18.274961  538905 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 19:03:18.275008  538905 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 19:03:18.279065  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 19:03:18.332125  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:03:18.339471  538905 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 19:03:18.339500  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 19:03:18.523917  538905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:03:18.523946  538905 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 19:03:18.537800  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 19:03:18.537840  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 19:03:18.537977  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 19:03:18.538010  538905 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 19:03:18.662179  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 19:03:18.751655  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:03:18.857084  538905 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:03:18.857120  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 19:03:19.003627  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 19:03:19.003663  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 19:03:19.206662  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 19:03:19.206701  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 19:03:19.340888  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:03:19.607772  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 19:03:19.607799  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 19:03:19.860002  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 19:03:19.860041  538905 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 19:03:20.146466  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 19:03:20.146501  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 19:03:20.537380  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 19:03:20.537406  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 19:03:20.864745  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:03:20.864778  538905 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 19:03:21.124829  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:03:21.301882  538905 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.471560987s)
	I1205 19:03:21.301923  538905 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1205 19:03:21.301925  538905 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.471626852s)
	I1205 19:03:21.301993  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.577184374s)
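	Note on the CoreDNS edit completed above: the bash pipeline run at 19:03:17.830281 inserts a hosts block (192.168.39.1 host.minikube.internal, with fallthrough) ahead of the forward plugin, enables the log plugin, and replaces the coredns ConfigMap. A minimal verification sketch, assuming the kubectl binary and kubeconfig paths shown in the log:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# expected to print the injected block:
	#   hosts {
	#      192.168.39.1 host.minikube.internal
	#      fallthrough
	#   }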
	I1205 19:03:21.302045  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:21.302061  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:21.302408  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:21.302424  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:21.302434  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:21.302441  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:21.302991  538905 node_ready.go:35] waiting up to 6m0s for node "addons-396564" to be "Ready" ...
	I1205 19:03:21.303143  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:21.303168  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:21.321487  538905 node_ready.go:49] node "addons-396564" has status "Ready":"True"
	I1205 19:03:21.321516  538905 node_ready.go:38] duration metric: took 18.493638ms for node "addons-396564" to be "Ready" ...
	I1205 19:03:21.321525  538905 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:03:21.346139  538905 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:21.864801  538905 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-396564" context rescaled to 1 replicas
	I1205 19:03:23.391110  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:24.307070  538905 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 19:03:24.307112  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:24.310606  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:24.311058  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:24.311090  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:24.311317  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:24.311526  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:24.311697  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:24.311882  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:24.782838  538905 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 19:03:24.978764  538905 addons.go:234] Setting addon gcp-auth=true in "addons-396564"
	I1205 19:03:24.978836  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:24.979296  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:24.979338  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:24.995795  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43057
	I1205 19:03:24.996256  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:24.996764  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:24.996787  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:24.997194  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:24.997684  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:24.997715  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:25.013546  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I1205 19:03:25.014063  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:25.014568  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:25.014593  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:25.014904  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:25.015108  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:25.016691  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:25.016960  538905 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 19:03:25.016991  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:25.019536  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:25.019989  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:25.020018  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:25.020212  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:25.020465  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:25.020663  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:25.020822  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:25.414722  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:27.148875  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.31137642s)
	I1205 19:03:27.148894  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.290667168s)
	I1205 19:03:27.148938  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.148952  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.148965  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.261328317s)
	I1205 19:03:27.148973  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149047  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149064  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (9.218759745s)
	I1205 19:03:27.149006  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149084  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149086  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149098  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149139  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.204658596s)
	I1205 19:03:27.149162  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149172  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149212  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.176621075s)
	I1205 19:03:27.149229  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149237  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149257  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.174370206s)
	I1205 19:03:27.149272  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149280  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149332  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.870241709s)
	I1205 19:03:27.149349  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149356  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149363  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.817208999s)
	I1205 19:03:27.149379  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149387  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149421  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.487212787s)
	I1205 19:03:27.149441  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149451  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149482  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.149489  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.397794878s)
	I1205 19:03:27.149507  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.149508  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.149518  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.149520  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149528  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149532  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149535  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149550  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.149561  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.149572  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149579  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149658  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.80872843s)
	I1205 19:03:27.149660  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.149674  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.149682  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149691  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	W1205 19:03:27.149690  538905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:03:27.149739  538905 retry.go:31] will retry after 170.150372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
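	Note on the retry above: the first apply races CRD establishment. The VolumeSnapshotClass object is submitted in the same invocation that creates the volumesnapshot CRDs, so the API server has no REST mapping for the new kind yet and the whole apply exits 1; minikube retries after ~170ms, and the retried invocation at 19:03:27.320084 adds --force. A minimal sketch of the underlying ordering fix, assuming the kubectl path, kubeconfig, and manifest names taken from the log:

	KUBECTL=/var/lib/minikube/binaries/v1.31.2/kubectl
	KCFG=/var/lib/minikube/kubeconfig
	# apply the snapshot CRDs on their own first
	sudo KUBECONFIG=$KCFG $KUBECTL apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	# block until the API server reports the CRDs as Established
	sudo KUBECONFIG=$KCFG $KUBECTL wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	# only then apply resources that use the new kinds
	sudo KUBECONFIG=$KCFG $KUBECTL apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml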
	I1205 19:03:27.149800  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.149841  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.149848  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.149855  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149862  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.150214  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.150231  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.150251  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.150258  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.150314  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.150323  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.150331  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.150338  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.150660  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.150699  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.150864  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.150876  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.150884  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.151384  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.151411  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.151418  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.151428  538905 addons.go:475] Verifying addon ingress=true in "addons-396564"
	I1205 19:03:27.152811  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.152846  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.152853  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.152861  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.152868  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.152917  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.152935  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.152941  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.152948  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.152954  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.153401  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.153430  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.153437  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.153447  538905 addons.go:475] Verifying addon registry=true in "addons-396564"
	I1205 19:03:27.154033  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.154065  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.154071  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.154078  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.154084  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.155127  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155167  538905 out.go:177] * Verifying ingress addon...
	I1205 19:03:27.155350  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155364  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155501  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155512  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155522  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155552  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155559  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155617  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155639  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155645  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155686  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155724  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155730  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155740  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.155746  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.155773  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155795  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155796  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155805  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155811  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155834  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155842  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155883  538905 out.go:177] * Verifying registry addon...
	I1205 19:03:27.155955  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155965  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155995  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.156042  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.156049  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.156320  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.156384  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.156392  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.156400  538905 addons.go:475] Verifying addon metrics-server=true in "addons-396564"
	I1205 19:03:27.159282  538905 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-396564 service yakd-dashboard -n yakd-dashboard
	
	I1205 19:03:27.160258  538905 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 19:03:27.160285  538905 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 19:03:27.213140  538905 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:03:27.213171  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:27.215940  538905 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 19:03:27.215968  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:27.234522  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.234557  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.234935  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.234955  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	W1205 19:03:27.235070  538905 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1205 19:03:27.261016  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.261047  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.261341  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.261363  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.320084  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:03:27.675760  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:27.676774  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:27.857884  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:28.172505  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:28.172512  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:28.213595  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.088690643s)
	I1205 19:03:28.213635  538905 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.196647686s)
	I1205 19:03:28.213666  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:28.213685  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:28.213980  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:28.213984  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:28.214001  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:28.214036  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:28.214045  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:28.214473  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:28.214528  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:28.214543  538905 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-396564"
	I1205 19:03:28.215861  538905 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:03:28.216866  538905 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 19:03:28.218797  538905 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 19:03:28.219941  538905 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 19:03:28.220343  538905 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 19:03:28.220372  538905 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 19:03:28.247520  538905 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:03:28.247545  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:28.312081  538905 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 19:03:28.312116  538905 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 19:03:28.397439  538905 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:03:28.397468  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 19:03:28.490562  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:03:28.664861  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:28.665427  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:28.724707  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:29.009861  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.689710769s)
	I1205 19:03:29.009943  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:29.009959  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:29.010391  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:29.010415  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:29.010425  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:29.010451  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:29.010510  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:29.010926  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:29.010948  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:29.010952  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:29.166360  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:29.167080  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:29.224362  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:29.671192  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:29.675772  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:29.757603  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:29.888009  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.397401133s)
	I1205 19:03:29.888074  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:29.888086  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:29.888411  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:29.888494  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:29.888517  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:29.888519  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:29.888528  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:29.888801  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:29.888846  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:29.890456  538905 addons.go:475] Verifying addon gcp-auth=true in "addons-396564"
	I1205 19:03:29.892199  538905 out.go:177] * Verifying gcp-auth addon...
	I1205 19:03:29.894491  538905 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 19:03:29.901008  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:29.922960  538905 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 19:03:29.922994  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:30.168826  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:30.169312  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:30.272681  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:30.404420  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:30.668057  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:30.668830  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:30.768177  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:30.901474  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:31.165318  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:31.165506  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:31.225852  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:31.398371  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:31.665113  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:31.666922  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:31.724941  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:31.899258  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:32.165813  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:32.165958  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:32.225196  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:32.352784  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:32.397888  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:32.678119  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:32.678669  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:32.724407  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:32.898533  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:33.164459  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:33.164888  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:33.226391  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:33.398836  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:33.665504  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:33.665695  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:33.727046  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:33.898412  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:34.166250  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:34.166280  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:34.225529  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:34.398691  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:34.665189  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:34.665420  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:34.725312  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:34.858878  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:34.897867  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:35.165892  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:35.166074  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:35.225338  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:35.399215  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:35.667622  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:35.667861  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:35.724931  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:35.898337  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:36.167323  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:36.167894  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:36.226530  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:36.398930  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:36.664756  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:36.665830  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:36.725337  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:36.902761  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:37.166500  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:37.166672  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:37.225350  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:37.361251  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:37.399348  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:37.666471  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:37.667929  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:37.727619  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:37.898659  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:38.165413  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:38.165773  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:38.225002  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:38.398811  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:38.665868  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:38.666379  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:38.724937  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:38.897904  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:39.169969  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:39.171046  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:39.267889  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:39.353755  538905 pod_ready.go:93] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.353780  538905 pod_ready.go:82] duration metric: took 18.007611624s for pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.353791  538905 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hls42" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.356712  538905 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-hls42" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hls42" not found
	I1205 19:03:39.356734  538905 pod_ready.go:82] duration metric: took 2.937552ms for pod "coredns-7c65d6cfc9-hls42" in "kube-system" namespace to be "Ready" ...
	E1205 19:03:39.356745  538905 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-hls42" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hls42" not found
	I1205 19:03:39.356752  538905 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jz7lb" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.362331  538905 pod_ready.go:93] pod "coredns-7c65d6cfc9-jz7lb" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.362354  538905 pod_ready.go:82] duration metric: took 5.590777ms for pod "coredns-7c65d6cfc9-jz7lb" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.362364  538905 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.366439  538905 pod_ready.go:93] pod "etcd-addons-396564" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.366457  538905 pod_ready.go:82] duration metric: took 4.085046ms for pod "etcd-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.366465  538905 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.370515  538905 pod_ready.go:93] pod "kube-apiserver-addons-396564" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.370533  538905 pod_ready.go:82] duration metric: took 4.059957ms for pod "kube-apiserver-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.370541  538905 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.398216  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:39.551873  538905 pod_ready.go:93] pod "kube-controller-manager-addons-396564" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.551902  538905 pod_ready.go:82] duration metric: took 181.352174ms for pod "kube-controller-manager-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.551917  538905 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r9sk8" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.665165  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:39.665337  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:39.725395  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:39.898316  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:39.950847  538905 pod_ready.go:93] pod "kube-proxy-r9sk8" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.950873  538905 pod_ready.go:82] duration metric: took 398.949152ms for pod "kube-proxy-r9sk8" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.950883  538905 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:40.164597  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:40.167587  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:40.225583  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:40.350845  538905 pod_ready.go:93] pod "kube-scheduler-addons-396564" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:40.350872  538905 pod_ready.go:82] duration metric: took 399.983082ms for pod "kube-scheduler-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:40.350883  538905 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:40.398572  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:40.666092  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:40.666584  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:40.725789  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:40.898728  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:41.165494  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:41.165930  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:41.224921  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:41.399355  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:41.666376  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:41.666828  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:41.724370  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:41.899184  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:42.168571  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:42.168801  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:42.226022  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:42.357785  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:42.397808  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:42.665153  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:42.666686  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:42.726451  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:42.898609  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:43.165012  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:43.165499  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:43.225987  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:43.398093  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:43.666133  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:43.666380  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:43.725867  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:43.898987  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:44.165607  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:44.167003  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:44.225504  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:44.399377  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:44.666241  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:44.666824  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:44.725679  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:44.857401  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:44.899114  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:45.165502  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:45.167101  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:45.225505  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:45.397873  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:45.665510  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:45.665816  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:45.724116  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:45.899057  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:46.165471  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:46.166833  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:46.225349  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:46.398338  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:46.930302  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:46.930796  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:46.930885  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:46.931867  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:46.936088  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:47.167978  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:47.168556  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:47.225358  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:47.398876  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:47.665371  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:47.665846  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:47.725691  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:47.898456  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:48.165222  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:48.166569  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:48.225020  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:48.399296  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:48.666193  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:48.668447  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:48.724921  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:48.898465  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:49.449560  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:49.449783  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:49.449805  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:49.453051  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:49.454348  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:49.663697  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:49.665037  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:49.724793  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:49.897904  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:50.164920  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:50.165343  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:50.225344  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:50.398459  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:50.695264  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:50.695483  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:50.997259  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:50.997805  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:51.165133  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:51.165167  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:51.224793  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:51.403994  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:51.666148  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:51.666284  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:51.725010  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:51.857649  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:51.898914  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:52.165548  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:52.165956  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:52.224531  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:52.398780  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:52.665463  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:52.665587  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:52.724214  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:52.898550  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:53.165811  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:53.165871  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:53.225496  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:53.397843  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:53.665453  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:53.665844  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:53.725052  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:53.898694  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:54.164902  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:54.165050  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:54.226000  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:54.356971  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:54.398410  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:54.664868  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:54.665252  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:54.724836  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:54.856716  538905 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:54.856746  538905 pod_ready.go:82] duration metric: took 14.505855224s for pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:54.856758  538905 pod_ready.go:39] duration metric: took 33.53522113s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:03:54.856779  538905 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:03:54.856830  538905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:03:54.874892  538905 api_server.go:72] duration metric: took 37.736246416s to wait for apiserver process to appear ...
	I1205 19:03:54.874930  538905 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:03:54.874954  538905 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1205 19:03:54.879526  538905 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
	ok
	I1205 19:03:54.880515  538905 api_server.go:141] control plane version: v1.31.2
	I1205 19:03:54.880544  538905 api_server.go:131] duration metric: took 5.605685ms to wait for apiserver health ...
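	The healthz probe logged just above is essentially an HTTPS GET against the apiserver's /healthz endpoint, retried until it returns 200. A minimal Go sketch of that kind of poll, assuming a placeholder URL and timeout and skipping TLS verification purely to keep the example self-contained (illustrative only, not minikube's actual api_server.go code):

	// Illustrative sketch: poll an apiserver /healthz endpoint until it returns
	// HTTP 200, roughly mirroring the check logged above. URL and timeout are
	// placeholders; TLS verification is skipped only to keep the sample runnable
	// against a local test cluster without its CA bundle.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
			}
			time.Sleep(2 * time.Second) // back off before the next attempt
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.9:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}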
	I1205 19:03:54.880556  538905 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:03:54.889469  538905 system_pods.go:59] 18 kube-system pods found
	I1205 19:03:54.889501  538905 system_pods.go:61] "amd-gpu-device-plugin-xcvzc" [89313f55-0769-4cd7-af1d-e97c6833dcef] Running
	I1205 19:03:54.889507  538905 system_pods.go:61] "coredns-7c65d6cfc9-jz7lb" [56b461df-6acc-4973-9067-3d64d678111c] Running
	I1205 19:03:54.889514  538905 system_pods.go:61] "csi-hostpath-attacher-0" [8e9fadd5-acf2-477a-9d62-c47987d16129] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 19:03:54.889520  538905 system_pods.go:61] "csi-hostpath-resizer-0" [432c94bf-2efd-467c-95cb-1aa632b845cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 19:03:54.889527  538905 system_pods.go:61] "csi-hostpathplugin-64t5f" [5be510d8-669e-43b3-9429-cfb59274f96d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:03:54.889531  538905 system_pods.go:61] "etcd-addons-396564" [6f990ebe-6f3e-4cd3-81d8-ba9f8b3013a3] Running
	I1205 19:03:54.889535  538905 system_pods.go:61] "kube-apiserver-addons-396564" [119c3cdb-12a4-45b6-a46a-b42bcc85bd84] Running
	I1205 19:03:54.889538  538905 system_pods.go:61] "kube-controller-manager-addons-396564" [83c3fd83-132d-4811-a930-3e91899ce37e] Running
	I1205 19:03:54.889545  538905 system_pods.go:61] "kube-ingress-dns-minikube" [364ca423-ae05-4a12-a6fc-11a86e3213ba] Running
	I1205 19:03:54.889549  538905 system_pods.go:61] "kube-proxy-r9sk8" [f3d31a62-b4c2-4d67-801b-a8623f03af65] Running
	I1205 19:03:54.889555  538905 system_pods.go:61] "kube-scheduler-addons-396564" [58a5b5ae-c488-445a-ae98-8396aae2efce] Running
	I1205 19:03:54.889560  538905 system_pods.go:61] "metrics-server-84c5f94fbc-p7wrj" [3aec8457-6ee0-4eeb-9abe-871b30996d06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 19:03:54.889566  538905 system_pods.go:61] "nvidia-device-plugin-daemonset-pngv4" [53fc8bbc-5529-4aaf-81c2-c11c9b882577] Running
	I1205 19:03:54.889571  538905 system_pods.go:61] "registry-66c9cd494c-ljr8x" [0b9f7adc-96cd-4c61-aab5-70400f03a848] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 19:03:54.889578  538905 system_pods.go:61] "registry-proxy-jzvwd" [7d2f7d65-082f-42f9-a2e0-4329066b06c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:03:54.889584  538905 system_pods.go:61] "snapshot-controller-56fcc65765-4kxc6" [b3247ae3-203c-44f6-82e8-ef0144eb6497] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:03:54.889593  538905 system_pods.go:61] "snapshot-controller-56fcc65765-7w2w5" [b4d32957-684f-41c3-947a-ddc8a4d8fb33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:03:54.889597  538905 system_pods.go:61] "storage-provisioner" [723d3daa-3e07-4da6-ab13-d88904d4c881] Running
	I1205 19:03:54.889605  538905 system_pods.go:74] duration metric: took 9.042981ms to wait for pod list to return data ...
	I1205 19:03:54.889615  538905 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:03:54.892165  538905 default_sa.go:45] found service account: "default"
	I1205 19:03:54.892199  538905 default_sa.go:55] duration metric: took 2.570951ms for default service account to be created ...
	I1205 19:03:54.892211  538905 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:03:54.898297  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:54.899234  538905 system_pods.go:86] 18 kube-system pods found
	I1205 19:03:54.899261  538905 system_pods.go:89] "amd-gpu-device-plugin-xcvzc" [89313f55-0769-4cd7-af1d-e97c6833dcef] Running
	I1205 19:03:54.899269  538905 system_pods.go:89] "coredns-7c65d6cfc9-jz7lb" [56b461df-6acc-4973-9067-3d64d678111c] Running
	I1205 19:03:54.899276  538905 system_pods.go:89] "csi-hostpath-attacher-0" [8e9fadd5-acf2-477a-9d62-c47987d16129] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 19:03:54.899285  538905 system_pods.go:89] "csi-hostpath-resizer-0" [432c94bf-2efd-467c-95cb-1aa632b845cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 19:03:54.899292  538905 system_pods.go:89] "csi-hostpathplugin-64t5f" [5be510d8-669e-43b3-9429-cfb59274f96d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:03:54.899297  538905 system_pods.go:89] "etcd-addons-396564" [6f990ebe-6f3e-4cd3-81d8-ba9f8b3013a3] Running
	I1205 19:03:54.899301  538905 system_pods.go:89] "kube-apiserver-addons-396564" [119c3cdb-12a4-45b6-a46a-b42bcc85bd84] Running
	I1205 19:03:54.899305  538905 system_pods.go:89] "kube-controller-manager-addons-396564" [83c3fd83-132d-4811-a930-3e91899ce37e] Running
	I1205 19:03:54.899310  538905 system_pods.go:89] "kube-ingress-dns-minikube" [364ca423-ae05-4a12-a6fc-11a86e3213ba] Running
	I1205 19:03:54.899313  538905 system_pods.go:89] "kube-proxy-r9sk8" [f3d31a62-b4c2-4d67-801b-a8623f03af65] Running
	I1205 19:03:54.899317  538905 system_pods.go:89] "kube-scheduler-addons-396564" [58a5b5ae-c488-445a-ae98-8396aae2efce] Running
	I1205 19:03:54.899322  538905 system_pods.go:89] "metrics-server-84c5f94fbc-p7wrj" [3aec8457-6ee0-4eeb-9abe-871b30996d06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 19:03:54.899326  538905 system_pods.go:89] "nvidia-device-plugin-daemonset-pngv4" [53fc8bbc-5529-4aaf-81c2-c11c9b882577] Running
	I1205 19:03:54.899332  538905 system_pods.go:89] "registry-66c9cd494c-ljr8x" [0b9f7adc-96cd-4c61-aab5-70400f03a848] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 19:03:54.899339  538905 system_pods.go:89] "registry-proxy-jzvwd" [7d2f7d65-082f-42f9-a2e0-4329066b06c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:03:54.899345  538905 system_pods.go:89] "snapshot-controller-56fcc65765-4kxc6" [b3247ae3-203c-44f6-82e8-ef0144eb6497] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:03:54.899353  538905 system_pods.go:89] "snapshot-controller-56fcc65765-7w2w5" [b4d32957-684f-41c3-947a-ddc8a4d8fb33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:03:54.899357  538905 system_pods.go:89] "storage-provisioner" [723d3daa-3e07-4da6-ab13-d88904d4c881] Running
	I1205 19:03:54.899366  538905 system_pods.go:126] duration metric: took 7.149025ms to wait for k8s-apps to be running ...
	I1205 19:03:54.899372  538905 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:03:54.899417  538905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:03:54.914848  538905 system_svc.go:56] duration metric: took 15.466366ms WaitForService to wait for kubelet
	I1205 19:03:54.914878  538905 kubeadm.go:582] duration metric: took 37.776240851s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:03:54.914899  538905 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:03:54.917651  538905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:03:54.917674  538905 node_conditions.go:123] node cpu capacity is 2
	I1205 19:03:54.917690  538905 node_conditions.go:105] duration metric: took 2.786458ms to run NodePressure ...
	I1205 19:03:54.917706  538905 start.go:241] waiting for startup goroutines ...
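	The NodePressure verification above reads each node's capacity and conditions from the API. A minimal client-go sketch of the same kind of inspection, assuming a placeholder kubeconfig path (illustrative only, not minikube's node_conditions.go implementation):

	// Illustrative sketch: list node capacity and conditions with client-go.
	// The kubeconfig path is a placeholder.
	package main

	import (
		"context"
		"fmt"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[v1.ResourceCPU]
			storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
			for _, c := range n.Status.Conditions {
				// Pressure conditions (MemoryPressure, DiskPressure, PIDPressure)
				// should report "False" on a healthy node.
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}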
	I1205 19:03:55.164312  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:55.164821  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:55.225883  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:55.398837  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:55.665474  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:55.665802  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:55.724424  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:55.898156  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:56.164634  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:56.165246  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:56.225110  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:56.398336  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:56.665300  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:56.665319  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:56.725528  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:56.897992  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:57.165345  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:57.165806  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:57.224360  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:57.397847  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:57.666268  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:57.666639  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:57.725442  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:57.898497  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:58.165644  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:58.165843  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:58.224791  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:58.398326  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:58.665026  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:58.665257  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:58.725043  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:58.898862  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:59.165732  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:59.165988  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:59.266568  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:59.398469  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:59.665292  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:59.665711  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:59.724368  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:59.898445  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:00.164587  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:00.167195  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:00.224703  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:00.398197  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:00.666896  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:00.667969  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:00.725135  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:00.899149  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:01.165709  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:01.166155  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:01.225835  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:01.399175  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:01.665075  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:01.665369  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:01.725434  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:01.898507  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:02.165066  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:02.165359  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:02.225230  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:02.400030  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:02.666599  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:02.666781  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:02.724705  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:02.898481  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:03.165483  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:03.166821  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:03.224523  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:03.397978  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:03.997203  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:03.997786  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:03.998069  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:03.998725  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:04.166828  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:04.166980  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:04.266952  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:04.398158  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:04.671559  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:04.672179  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:04.726431  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:04.898086  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:05.165525  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:05.166502  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:05.225740  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:05.398295  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:05.665295  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:05.665689  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:05.724032  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:05.899217  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:06.166632  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:06.170111  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:06.225314  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:06.398267  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:06.666211  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:06.666807  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:06.725518  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:06.902145  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:07.165700  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:07.165933  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:07.226781  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:07.398071  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:07.666118  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:07.668077  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:07.724402  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:07.898055  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:08.165497  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:08.166641  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:08.229895  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:08.399016  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:08.666242  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:08.666404  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:08.767248  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:08.898544  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:09.164339  538905 kapi.go:107] duration metric: took 42.004068263s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 19:04:09.165390  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:09.225258  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:09.398855  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:09.670280  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:09.769065  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:09.898896  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:10.164923  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:10.225150  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:10.398263  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:10.665538  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:10.725047  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:10.898920  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:11.165432  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:11.225634  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:11.399243  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:11.671241  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:11.725004  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:11.898410  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:12.165562  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:12.225442  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:12.397936  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:12.665118  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:12.724555  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:12.898425  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:13.165883  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:13.226200  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:13.398236  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:14.034612  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:14.135982  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:14.136507  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:14.165041  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:14.224602  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:14.399017  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:14.665928  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:14.725428  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:14.899134  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:15.164860  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:15.225889  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:15.399216  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:15.665319  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:15.725113  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:15.901299  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:16.165566  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:16.225544  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:16.397899  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:16.665199  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:16.725317  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:16.897452  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:17.165569  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:17.224832  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:17.399071  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:17.664950  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:17.725115  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:17.899596  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:18.165694  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:18.225583  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:18.398486  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:18.664612  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:18.725305  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:18.897615  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:19.165462  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:19.225858  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:19.398641  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:19.664752  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:19.724521  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:19.897768  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:20.165474  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:20.267007  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:20.398501  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:20.664825  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:20.725221  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:20.899486  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:21.165586  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:21.266966  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:21.399517  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:21.664898  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:21.724711  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:21.898565  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:22.164990  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:22.224947  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:22.398445  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:22.665832  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:22.725004  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:22.898638  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:23.170481  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:23.227775  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:23.398305  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:23.665830  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:23.725184  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:23.898931  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:24.166543  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:24.225766  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:24.397870  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:24.666191  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:24.724692  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:24.899425  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:25.166799  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:25.267556  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:25.398956  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:25.665028  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:25.727668  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:25.897758  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:26.168078  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:26.224700  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:26.398115  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:26.665501  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:27.066893  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:27.067207  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:27.165308  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:27.225588  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:27.398700  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:27.666283  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:27.725336  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:27.899011  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:28.165639  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:28.225557  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:28.397914  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:28.665286  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:28.724940  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:28.900600  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:29.167471  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:29.270703  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:29.397771  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:29.665673  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:29.725171  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:29.899161  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:30.165255  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:30.224665  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:30.398318  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:30.666706  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:30.724698  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:30.898081  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:31.165143  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:31.224466  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:31.399405  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:31.665641  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:31.724771  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:31.898834  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:32.177744  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:32.228016  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:32.399308  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:32.669092  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:32.728758  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:32.904324  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:33.168403  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:33.225572  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:33.398522  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:33.664422  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:33.724615  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:33.906701  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:34.165670  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:34.266735  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:34.399539  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:34.664853  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:34.728466  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:34.897970  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:35.165033  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:35.225220  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:35.398988  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:35.668994  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:35.771182  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:35.899200  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:36.165409  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:36.225713  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:36.400010  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:36.666213  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:36.724911  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:36.898466  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:37.165988  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:37.224834  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:37.398577  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:37.664930  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:37.724924  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:37.898478  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:38.164386  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:38.225384  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:38.397527  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:38.664678  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:38.725313  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:38.898134  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:39.166334  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:39.267619  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:39.441602  538905 kapi.go:107] duration metric: took 1m9.547108072s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 19:04:39.443644  538905 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-396564 cluster.
	I1205 19:04:39.445116  538905 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 19:04:39.446702  538905 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
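The three gcp-auth lines above are the only how-to guidance in this stretch of the log: pods can opt out of credential mounting via the `gcp-auth-skip-secret` label. As a minimal sketch (not part of the test output; the "true" value and the pod/container names are assumptions, the log only names the label key), such a pod could be declared with the Kubernetes Go types like this:

    // A minimal sketch (not from this report) of a pod that opts out of GCP
    // credential mounting by carrying the gcp-auth-skip-secret label named in
    // the log above. The "true" value and the pod/container names are
    // assumptions; the log only specifies the label key.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := corev1.Pod{
    		TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      "no-gcp-auth-example", // hypothetical name
    			Namespace: "default",
    			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "app",            // hypothetical container
    				Image: "busybox:stable", // placeholder image
    			}},
    		},
    	}
    	fmt.Println(pod.Labels) // map[gcp-auth-skip-secret:true]
    }

Per the log, pods created without this label (or without being recreated / re-enabled with --refresh) keep getting the credentials mounted.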
	I1205 19:04:39.669011  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:39.726880  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:40.167397  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:40.273060  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:40.665856  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:40.725491  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:41.165920  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:41.224752  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:41.841987  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:41.845937  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:42.166053  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:42.267570  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:42.668896  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:42.725172  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:43.165563  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:43.225351  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:43.665697  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:43.724972  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:44.165507  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:44.225155  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:44.682526  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:44.725890  538905 kapi.go:107] duration metric: took 1m16.505944912s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 19:04:45.165247  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:45.665445  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:46.165901  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:46.664963  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:47.166768  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:47.665295  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:48.165662  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:48.665060  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:49.164465  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:49.665847  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:50.165029  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:50.665470  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:51.166214  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:51.666110  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:52.165955  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:52.665616  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:53.164548  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:53.665363  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:54.166091  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:54.665385  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:55.165481  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:55.665754  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:56.165401  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:56.666064  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:57.165387  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:57.665025  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:58.165053  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:58.665280  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:59.165210  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:59.666525  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:00.165751  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:00.665949  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:01.165765  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:01.665296  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:02.165686  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:02.665470  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:03.165585  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:03.664890  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:04.164960  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:04.664924  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:05.166274  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:05.665557  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:06.166219  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:06.665374  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:07.165595  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:07.665000  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:08.165070  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:08.665589  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:09.164864  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:09.674144  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:10.165463  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:10.665644  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:11.164727  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:11.664817  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:12.165512  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:12.665873  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:13.165154  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:13.665080  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:14.165250  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:14.665248  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:15.164308  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:15.665437  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:16.165253  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:16.665333  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:17.165379  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:17.665128  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:18.165552  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:18.665677  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:19.164813  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:19.665378  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:20.165393  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:20.666191  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:21.164654  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:21.666380  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:22.165923  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:22.665444  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:23.165731  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:23.665878  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:24.165283  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:24.665426  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:25.165191  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:25.665781  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:26.165726  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:26.665719  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:27.164846  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:27.666066  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:28.164694  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:28.665177  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:29.165463  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:29.665549  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:30.165823  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:30.665429  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:31.165351  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:31.665835  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:32.165182  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:32.665500  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:33.165714  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:33.664865  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:34.164570  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:34.665185  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:35.165383  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:35.665724  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:36.166306  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:36.666624  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:37.165304  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:37.665551  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:38.165799  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:38.664772  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:39.164683  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:39.664394  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:40.165415  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:40.668709  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:41.164405  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:41.665845  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:42.166465  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:42.664740  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:43.165133  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:43.666272  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:44.165881  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:44.663967  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:45.166488  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:45.666195  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:46.165093  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:46.666035  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:47.164865  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:47.664737  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:48.164464  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:48.665655  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:49.578460  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:49.665361  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:50.167915  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:50.665652  538905 kapi.go:107] duration metric: took 2m23.505391143s to wait for app.kubernetes.io/name=ingress-nginx ...
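Most of the log above is one repeated pattern: kapi.go lists pods for a label selector, logs "Pending" until a pod is Running, then records the total wait as a duration metric. A rough Go sketch of that pattern follows (not minikube's actual kapi implementation; it assumes client-go, a kubeconfig at ~/.kube/config, and an 18-minute timeout, while the 500ms interval matches the log's cadence):

    // A rough sketch (not minikube's kapi.go) of the polling this log records:
    // list pods matching a label selector, treat anything not Running as still
    // pending, and measure how long the wait took.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) (time.Duration, error) {
    	start := time.Now()
    	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 18*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					return true, nil
    				}
    			}
    			return false, nil // no Running pod yet, e.g. still Pending
    		})
    	return time.Since(start), err
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	took, err := waitForPod(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
    	fmt.Printf("took %s to wait for app.kubernetes.io/name=ingress-nginx (err=%v)\n", took, err)
    }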
	I1205 19:05:50.667912  538905 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, inspektor-gadget, cloud-spanner, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1205 19:05:50.669483  538905 addons.go:510] duration metric: took 2m33.530805777s for enable addons: enabled=[ingress-dns nvidia-device-plugin inspektor-gadget cloud-spanner amd-gpu-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1205 19:05:50.669538  538905 start.go:246] waiting for cluster config update ...
	I1205 19:05:50.669559  538905 start.go:255] writing updated cluster config ...
	I1205 19:05:50.669873  538905 ssh_runner.go:195] Run: rm -f paused
	I1205 19:05:50.724736  538905 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 19:05:50.726653  538905 out.go:177] * Done! kubectl is now configured to use "addons-396564" cluster and "default" namespace by default
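The "Done!" line reports that kubectl now defaults to the "addons-396564" context and the "default" namespace. A hedged sketch of pointing a Go client at that same context (assuming k8s.io/client-go; the context and namespace names are taken from the log, everything else is illustrative):

    // A sketch of selecting the "addons-396564" kubeconfig context from Go,
    // mirroring what the Done! line says kubectl is now set up to use.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	rules := clientcmd.NewDefaultClientConfigLoadingRules() // ~/.kube/config by default
    	overrides := &clientcmd.ConfigOverrides{CurrentContext: "addons-396564"}
    	restCfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(restCfg)
    	if err != nil {
    		panic(err)
    	}
    	// "default" namespace, as reported by the Done! message.
    	pods, err := cs.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d pods in the default namespace\n", len(pods.Items))
    }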
	
	
	==> CRI-O <==
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.213488625Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425747213463087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595908,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74e2a264-dd30-4202-9438-69aef7dac2f4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.214157517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3acb76a-72a8-44f9-b20c-cf761a0668ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.214213372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3acb76a-72a8-44f9-b20c-cf761a0668ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.214523045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1420a7c820a076fdfced7aacfe1fccedb6314e31c747d81513ddf7e07b6895c5,PodSandboxId:ec3d843ef852d8683c160e42860bc8d8cdbb361d7ced78dc57559b0dccf91ed6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733425606318379684,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e772fa6-e5dd-49b1-a470-bdca82384b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80305a2941c96368ed3244f100796fdded119c2ef7516e38ba7e3668377e6e57,PodSandboxId:ba639c8e211c850ecc057e571093b6dc0d4934d07509176d09837847b4bb38ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733425554801287054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ab9fb43-6d1a-4c93-b7a8-53945e058344,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37e67e8e827ee4de8233aca03a3866a434797aa33d345e93f0f727d60a4e1232,PodSandboxId:eda389b40073db914cbcd338a0a10fbacd010503a33136739403402f308ef68d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733425549784232541,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-88jfh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b20c0178-38d3-419f-8e0c-f10716952335,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:37b8a786d2dc9c9960da54a4277ba66bc9cf4e8b5e6ef11e5d5f79ce2b28f081,PodSandboxId:1acf47365ef2e5b6356d08aa30ebeeb159f9c144af7a14cfd460b3071ca425f9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733425482201279637,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5bssn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c195682b-27b3-4d4c-a1e3-6609f9cf0fb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8cc5be938c667d5bec6518034616b42c2a9cdbcc72f084ebc17bb04e35f1b20,PodSandboxId:9777d6f18e72329cb03777a5fb06346846122173a802c08b545a67d192b39a5d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733425472433204052,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w2bgh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2f11d9d-ca91-4fd2-9bba-3c2016ed8c67,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50d9c2bd4f7c85bc5ebb19c9c273d508e1301d4e34d5e88e109b1981a40a79b,PodSandboxId:0a2f2f63a4115b01d236445efb49e4bdbcf925902d238e9ccd32f703863a2355,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733425444128224343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-p7wrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aec8457-6ee0-4eeb-9abe-871b30996d06,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b6e3a4fc29407b5843faf06977ce6db4e1a5bbdd36df7bcfc91433c4d9799c,PodSandboxId:dc1450352831648af2fa98196420623803e3dc20de44f6ad344378a1588ee48a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733425418424303370,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xcvzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89313f55-0769-4cd7-af1d-e97c6833dcef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93932d6c4d566cd8b4eb898b5405190bac58abf090434f4d2986742606c49eb4,PodSandboxId:8a3549308ad387ac68610a83abfe57083b1d2d7e2fc68cc6a9d2e616e823818a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733425415493801002,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364ca423-ae05-4a12-a6fc-11a86e3213ba,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbae14404a1302fb8ac2eb0ca8137ed78f59059acd980e3980d42a29c87e08f9,PodSandboxId:4e52b6a4649ac22d5
f69e6f436af0c8ed5d609cd11284a474260bbde2bc2960b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733425404286986575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723d3daa-3e07-4da6-ab13-d88904d4c881,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789a4b25d853bff2e452046d0bbe30daa5dd750b4815b30aef8774af9201999b,PodSandboxId:dbe32b06c8a1b21a3663371d35b00
1cd90a3d208b335bff5ac6850e86d92421f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733425401627513876,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jz7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b461df-6acc-4973-9067-3d64d678111c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25818b4b391668a26fc48ea3644968bead720d71a2521a4cf15faef5dcf7db75,PodSandboxId:5cf2096e393a1bfc1f8315d3451d888d501c7de4fcfb2b1a1d55e820e9099042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733425398081799975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9sk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d31a62-b4c2-4d67-801b-a8623f03af65,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:555285d2c5baa83e8a141e31cb63b1ecf7f24747793d17bc184d077aa32d32ff,PodSandboxId:8390342f3267cb0ec0781d40326f871cc206876b0335b89eb5303cce9eddc54b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733425387149945176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03644608f10a19dcf721eb0920007288,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16b5c2bcff1353331c9f81ed262d909e8ba9ec868b3ad9d3a34b228ba38fe53,PodSandboxId:0ea5f64294d5938655208e3040270b0bda77dee532116d1f88002dc1c1901133,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733425387132208911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2749c1e2468930ab4ed523429fa7366,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:acbbe2cd3da913dd2fca7ca3cb2016f3e74b87dcffb1f4e7915d49104984e28e,PodSandboxId:eef6ab607009ddac22a66acc75797bc43630ff9d833bbdc333ff952afdd8d37f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733425387097760274,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce9e3111d0ed63fd4adf56f8ae1a972,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:d573ee316398f9d872687b34b67057b642f060e9f6be1e2047272f413f522cc6,PodSandboxId:593ce4fc9d33b9231528e59a66954446b8bc8d05ac419e1be5f5caed8bcef141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733425387066497445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3feb065f68b065eae6360503f017d29d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=e3acb76a-72a8-44f9-b20c-cf761a0668ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.255410634Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7e949aa-6c57-4f1a-a906-c2d583ea4ad2 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.255509371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7e949aa-6c57-4f1a-a906-c2d583ea4ad2 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.256977937Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a66915c-5bec-44a9-befb-7bf4f1fbdcca name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.258204606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425747258177254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595908,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a66915c-5bec-44a9-befb-7bf4f1fbdcca name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.258882484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef15fb41-8b25-456c-99ca-903890e43720 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.258951669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef15fb41-8b25-456c-99ca-903890e43720 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.259263297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1420a7c820a076fdfced7aacfe1fccedb6314e31c747d81513ddf7e07b6895c5,PodSandboxId:ec3d843ef852d8683c160e42860bc8d8cdbb361d7ced78dc57559b0dccf91ed6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733425606318379684,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e772fa6-e5dd-49b1-a470-bdca82384b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80305a2941c96368ed3244f100796fdded119c2ef7516e38ba7e3668377e6e57,PodSandboxId:ba639c8e211c850ecc057e571093b6dc0d4934d07509176d09837847b4bb38ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733425554801287054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ab9fb43-6d1a-4c93-b7a8-53945e058344,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37e67e8e827ee4de8233aca03a3866a434797aa33d345e93f0f727d60a4e1232,PodSandboxId:eda389b40073db914cbcd338a0a10fbacd010503a33136739403402f308ef68d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733425549784232541,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-88jfh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b20c0178-38d3-419f-8e0c-f10716952335,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:37b8a786d2dc9c9960da54a4277ba66bc9cf4e8b5e6ef11e5d5f79ce2b28f081,PodSandboxId:1acf47365ef2e5b6356d08aa30ebeeb159f9c144af7a14cfd460b3071ca425f9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733425482201279637,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5bssn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c195682b-27b3-4d4c-a1e3-6609f9cf0fb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8cc5be938c667d5bec6518034616b42c2a9cdbcc72f084ebc17bb04e35f1b20,PodSandboxId:9777d6f18e72329cb03777a5fb06346846122173a802c08b545a67d192b39a5d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733425472433204052,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w2bgh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2f11d9d-ca91-4fd2-9bba-3c2016ed8c67,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50d9c2bd4f7c85bc5ebb19c9c273d508e1301d4e34d5e88e109b1981a40a79b,PodSandboxId:0a2f2f63a4115b01d236445efb49e4bdbcf925902d238e9ccd32f703863a2355,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733425444128224343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-p7wrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aec8457-6ee0-4eeb-9abe-871b30996d06,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b6e3a4fc29407b5843faf06977ce6db4e1a5bbdd36df7bcfc91433c4d9799c,PodSandboxId:dc1450352831648af2fa98196420623803e3dc20de44f6ad344378a1588ee48a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733425418424303370,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xcvzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89313f55-0769-4cd7-af1d-e97c6833dcef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93932d6c4d566cd8b4eb898b5405190bac58abf090434f4d2986742606c49eb4,PodSandboxId:8a3549308ad387ac68610a83abfe57083b1d2d7e2fc68cc6a9d2e616e823818a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733425415493801002,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364ca423-ae05-4a12-a6fc-11a86e3213ba,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbae14404a1302fb8ac2eb0ca8137ed78f59059acd980e3980d42a29c87e08f9,PodSandboxId:4e52b6a4649ac22d5
f69e6f436af0c8ed5d609cd11284a474260bbde2bc2960b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733425404286986575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723d3daa-3e07-4da6-ab13-d88904d4c881,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789a4b25d853bff2e452046d0bbe30daa5dd750b4815b30aef8774af9201999b,PodSandboxId:dbe32b06c8a1b21a3663371d35b00
1cd90a3d208b335bff5ac6850e86d92421f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733425401627513876,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jz7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b461df-6acc-4973-9067-3d64d678111c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25818b4b391668a26fc48ea3644968bead720d71a2521a4cf15faef5dcf7db75,PodSandboxId:5cf2096e393a1bfc1f8315d3451d888d501c7de4fcfb2b1a1d55e820e9099042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733425398081799975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9sk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d31a62-b4c2-4d67-801b-a8623f03af65,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:555285d2c5baa83e8a141e31cb63b1ecf7f24747793d17bc184d077aa32d32ff,PodSandboxId:8390342f3267cb0ec0781d40326f871cc206876b0335b89eb5303cce9eddc54b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733425387149945176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03644608f10a19dcf721eb0920007288,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16b5c2bcff1353331c9f81ed262d909e8ba9ec868b3ad9d3a34b228ba38fe53,PodSandboxId:0ea5f64294d5938655208e3040270b0bda77dee532116d1f88002dc1c1901133,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733425387132208911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2749c1e2468930ab4ed523429fa7366,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:acbbe2cd3da913dd2fca7ca3cb2016f3e74b87dcffb1f4e7915d49104984e28e,PodSandboxId:eef6ab607009ddac22a66acc75797bc43630ff9d833bbdc333ff952afdd8d37f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733425387097760274,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce9e3111d0ed63fd4adf56f8ae1a972,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:d573ee316398f9d872687b34b67057b642f060e9f6be1e2047272f413f522cc6,PodSandboxId:593ce4fc9d33b9231528e59a66954446b8bc8d05ac419e1be5f5caed8bcef141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733425387066497445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3feb065f68b065eae6360503f017d29d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=ef15fb41-8b25-456c-99ca-903890e43720 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.300681398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=165f4f58-5aaa-42fb-bb6f-66f3f0dacacd name=/runtime.v1.RuntimeService/Version
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.300884044Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=165f4f58-5aaa-42fb-bb6f-66f3f0dacacd name=/runtime.v1.RuntimeService/Version
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.302244805Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f6f15e4-89ef-4729-a673-5f4c42135bdc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.303454823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425747303423490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595908,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f6f15e4-89ef-4729-a673-5f4c42135bdc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.304144054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7095c81-471a-46a0-ac6a-dbd33240f0f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.304233664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7095c81-471a-46a0-ac6a-dbd33240f0f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.304576097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1420a7c820a076fdfced7aacfe1fccedb6314e31c747d81513ddf7e07b6895c5,PodSandboxId:ec3d843ef852d8683c160e42860bc8d8cdbb361d7ced78dc57559b0dccf91ed6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733425606318379684,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e772fa6-e5dd-49b1-a470-bdca82384b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80305a2941c96368ed3244f100796fdded119c2ef7516e38ba7e3668377e6e57,PodSandboxId:ba639c8e211c850ecc057e571093b6dc0d4934d07509176d09837847b4bb38ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733425554801287054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ab9fb43-6d1a-4c93-b7a8-53945e058344,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37e67e8e827ee4de8233aca03a3866a434797aa33d345e93f0f727d60a4e1232,PodSandboxId:eda389b40073db914cbcd338a0a10fbacd010503a33136739403402f308ef68d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733425549784232541,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-88jfh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b20c0178-38d3-419f-8e0c-f10716952335,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:37b8a786d2dc9c9960da54a4277ba66bc9cf4e8b5e6ef11e5d5f79ce2b28f081,PodSandboxId:1acf47365ef2e5b6356d08aa30ebeeb159f9c144af7a14cfd460b3071ca425f9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733425482201279637,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5bssn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c195682b-27b3-4d4c-a1e3-6609f9cf0fb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8cc5be938c667d5bec6518034616b42c2a9cdbcc72f084ebc17bb04e35f1b20,PodSandboxId:9777d6f18e72329cb03777a5fb06346846122173a802c08b545a67d192b39a5d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733425472433204052,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w2bgh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2f11d9d-ca91-4fd2-9bba-3c2016ed8c67,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50d9c2bd4f7c85bc5ebb19c9c273d508e1301d4e34d5e88e109b1981a40a79b,PodSandboxId:0a2f2f63a4115b01d236445efb49e4bdbcf925902d238e9ccd32f703863a2355,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733425444128224343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-p7wrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aec8457-6ee0-4eeb-9abe-871b30996d06,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b6e3a4fc29407b5843faf06977ce6db4e1a5bbdd36df7bcfc91433c4d9799c,PodSandboxId:dc1450352831648af2fa98196420623803e3dc20de44f6ad344378a1588ee48a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733425418424303370,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xcvzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89313f55-0769-4cd7-af1d-e97c6833dcef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93932d6c4d566cd8b4eb898b5405190bac58abf090434f4d2986742606c49eb4,PodSandboxId:8a3549308ad387ac68610a83abfe57083b1d2d7e2fc68cc6a9d2e616e823818a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733425415493801002,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364ca423-ae05-4a12-a6fc-11a86e3213ba,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbae14404a1302fb8ac2eb0ca8137ed78f59059acd980e3980d42a29c87e08f9,PodSandboxId:4e52b6a4649ac22d5
f69e6f436af0c8ed5d609cd11284a474260bbde2bc2960b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733425404286986575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723d3daa-3e07-4da6-ab13-d88904d4c881,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789a4b25d853bff2e452046d0bbe30daa5dd750b4815b30aef8774af9201999b,PodSandboxId:dbe32b06c8a1b21a3663371d35b00
1cd90a3d208b335bff5ac6850e86d92421f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733425401627513876,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jz7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b461df-6acc-4973-9067-3d64d678111c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25818b4b391668a26fc48ea3644968bead720d71a2521a4cf15faef5dcf7db75,PodSandboxId:5cf2096e393a1bfc1f8315d3451d888d501c7de4fcfb2b1a1d55e820e9099042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733425398081799975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9sk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d31a62-b4c2-4d67-801b-a8623f03af65,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:555285d2c5baa83e8a141e31cb63b1ecf7f24747793d17bc184d077aa32d32ff,PodSandboxId:8390342f3267cb0ec0781d40326f871cc206876b0335b89eb5303cce9eddc54b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733425387149945176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03644608f10a19dcf721eb0920007288,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16b5c2bcff1353331c9f81ed262d909e8ba9ec868b3ad9d3a34b228ba38fe53,PodSandboxId:0ea5f64294d5938655208e3040270b0bda77dee532116d1f88002dc1c1901133,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733425387132208911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2749c1e2468930ab4ed523429fa7366,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:acbbe2cd3da913dd2fca7ca3cb2016f3e74b87dcffb1f4e7915d49104984e28e,PodSandboxId:eef6ab607009ddac22a66acc75797bc43630ff9d833bbdc333ff952afdd8d37f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733425387097760274,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce9e3111d0ed63fd4adf56f8ae1a972,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:d573ee316398f9d872687b34b67057b642f060e9f6be1e2047272f413f522cc6,PodSandboxId:593ce4fc9d33b9231528e59a66954446b8bc8d05ac419e1be5f5caed8bcef141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733425387066497445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3feb065f68b065eae6360503f017d29d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=a7095c81-471a-46a0-ac6a-dbd33240f0f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.340069328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0885d25-8acf-4144-a09c-bdecdc91c8ae name=/runtime.v1.RuntimeService/Version
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.340165397Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0885d25-8acf-4144-a09c-bdecdc91c8ae name=/runtime.v1.RuntimeService/Version
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.341615636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d70c303-a08b-4c26-913e-6e70b30e60a5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.346415579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425747346307903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595908,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d70c303-a08b-4c26-913e-6e70b30e60a5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.349626903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40ef5591-9c5d-4459-ae2a-21733a20b13c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.349679642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40ef5591-9c5d-4459-ae2a-21733a20b13c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:09:07 addons-396564 crio[665]: time="2024-12-05 19:09:07.350044374Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1420a7c820a076fdfced7aacfe1fccedb6314e31c747d81513ddf7e07b6895c5,PodSandboxId:ec3d843ef852d8683c160e42860bc8d8cdbb361d7ced78dc57559b0dccf91ed6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733425606318379684,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e772fa6-e5dd-49b1-a470-bdca82384b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80305a2941c96368ed3244f100796fdded119c2ef7516e38ba7e3668377e6e57,PodSandboxId:ba639c8e211c850ecc057e571093b6dc0d4934d07509176d09837847b4bb38ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733425554801287054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ab9fb43-6d1a-4c93-b7a8-53945e058344,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37e67e8e827ee4de8233aca03a3866a434797aa33d345e93f0f727d60a4e1232,PodSandboxId:eda389b40073db914cbcd338a0a10fbacd010503a33136739403402f308ef68d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733425549784232541,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-88jfh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b20c0178-38d3-419f-8e0c-f10716952335,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:37b8a786d2dc9c9960da54a4277ba66bc9cf4e8b5e6ef11e5d5f79ce2b28f081,PodSandboxId:1acf47365ef2e5b6356d08aa30ebeeb159f9c144af7a14cfd460b3071ca425f9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733425482201279637,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5bssn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c195682b-27b3-4d4c-a1e3-6609f9cf0fb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8cc5be938c667d5bec6518034616b42c2a9cdbcc72f084ebc17bb04e35f1b20,PodSandboxId:9777d6f18e72329cb03777a5fb06346846122173a802c08b545a67d192b39a5d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733425472433204052,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w2bgh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2f11d9d-ca91-4fd2-9bba-3c2016ed8c67,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50d9c2bd4f7c85bc5ebb19c9c273d508e1301d4e34d5e88e109b1981a40a79b,PodSandboxId:0a2f2f63a4115b01d236445efb49e4bdbcf925902d238e9ccd32f703863a2355,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733425444128224343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-p7wrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aec8457-6ee0-4eeb-9abe-871b30996d06,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b6e3a4fc29407b5843faf06977ce6db4e1a5bbdd36df7bcfc91433c4d9799c,PodSandboxId:dc1450352831648af2fa98196420623803e3dc20de44f6ad344378a1588ee48a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256
:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733425418424303370,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xcvzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89313f55-0769-4cd7-af1d-e97c6833dcef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93932d6c4d566cd8b4eb898b5405190bac58abf090434f4d2986742606c49eb4,PodSandboxId:8a3549308ad387ac68610a83abfe57083b1d2d7e2fc68cc6a9d2e616e823818a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:
gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733425415493801002,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364ca423-ae05-4a12-a6fc-11a86e3213ba,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbae14404a1302fb8ac2eb0ca8137ed78f59059acd980e3980d42a29c87e08f9,PodSandboxId:4e52b6a4649ac22d5
f69e6f436af0c8ed5d609cd11284a474260bbde2bc2960b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733425404286986575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723d3daa-3e07-4da6-ab13-d88904d4c881,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789a4b25d853bff2e452046d0bbe30daa5dd750b4815b30aef8774af9201999b,PodSandboxId:dbe32b06c8a1b21a3663371d35b00
1cd90a3d208b335bff5ac6850e86d92421f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733425401627513876,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jz7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b461df-6acc-4973-9067-3d64d678111c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25818b4b391668a26fc48ea3644968bead720d71a2521a4cf15faef5dcf7db75,PodSandboxId:5cf2096e393a1bfc1f8315d3451d888d501c7de4fcfb2b1a1d55e820e9099042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733425398081799975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9sk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d31a62-b4c2-4d67-801b-a8623f03af65,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:555285d2c5baa83e8a141e31cb63b1ecf7f24747793d17bc184d077aa32d32ff,PodSandboxId:8390342f3267cb0ec0781d40326f871cc206876b0335b89eb5303cce9eddc54b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733425387149945176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03644608f10a19dcf721eb0920007288,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16b5c2bcff1353331c9f81ed262d909e8ba9ec868b3ad9d3a34b228ba38fe53,PodSandboxId:0ea5f64294d5938655208e3040270b0bda77dee532116d1f88002dc1c1901133,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733425387132208911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2749c1e2468930ab4ed523429fa7366,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:acbbe2cd3da913dd2fca7ca3cb2016f3e74b87dcffb1f4e7915d49104984e28e,PodSandboxId:eef6ab607009ddac22a66acc75797bc43630ff9d833bbdc333ff952afdd8d37f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733425387097760274,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce9e3111d0ed63fd4adf56f8ae1a972,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:d573ee316398f9d872687b34b67057b642f060e9f6be1e2047272f413f522cc6,PodSandboxId:593ce4fc9d33b9231528e59a66954446b8bc8d05ac419e1be5f5caed8bcef141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733425387066497445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3feb065f68b065eae6360503f017d29d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=40ef5591-9c5d-4459-ae2a-21733a20b13c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1420a7c820a07       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   ec3d843ef852d       nginx
	80305a2941c96       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   ba639c8e211c8       busybox
	37e67e8e827ee       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   eda389b40073d       ingress-nginx-controller-5f85ff4588-88jfh
	37b8a786d2dc9       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             4 minutes ago       Exited              patch                     2                   1acf47365ef2e       ingress-nginx-admission-patch-5bssn
	c8cc5be938c66       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   9777d6f18e723       ingress-nginx-admission-create-w2bgh
	a50d9c2bd4f7c       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        5 minutes ago       Running             metrics-server            0                   0a2f2f63a4115       metrics-server-84c5f94fbc-p7wrj
	29b6e3a4fc294       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   dc14503528316       amd-gpu-device-plugin-xcvzc
	93932d6c4d566       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   8a3549308ad38       kube-ingress-dns-minikube
	dbae14404a130       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   4e52b6a4649ac       storage-provisioner
	789a4b25d853b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   dbe32b06c8a1b       coredns-7c65d6cfc9-jz7lb
	25818b4b39166       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   5cf2096e393a1       kube-proxy-r9sk8
	555285d2c5baa       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             6 minutes ago       Running             kube-controller-manager   0                   8390342f3267c       kube-controller-manager-addons-396564
	e16b5c2bcff13       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             6 minutes ago       Running             etcd                      0                   0ea5f64294d59       etcd-addons-396564
	acbbe2cd3da91       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             6 minutes ago       Running             kube-scheduler            0                   eef6ab607009d       kube-scheduler-addons-396564
	d573ee316398f       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             6 minutes ago       Running             kube-apiserver            0                   593ce4fc9d33b       kube-apiserver-addons-396564
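A container listing like the table above can usually be reproduced directly against the CRI runtime; a minimal sketch, assuming crictl is available inside the minikube VM for this profile:

    minikube -p addons-396564 ssh -- sudo crictl ps -a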
	
	
	==> coredns [789a4b25d853bff2e452046d0bbe30daa5dd750b4815b30aef8774af9201999b] <==
	[INFO] 10.244.0.9:53006 - 39308 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.002843561s
	[INFO] 10.244.0.9:53006 - 2213 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00012954s
	[INFO] 10.244.0.9:53006 - 14882 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000075342s
	[INFO] 10.244.0.9:53006 - 38590 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000162003s
	[INFO] 10.244.0.9:53006 - 36027 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000064132s
	[INFO] 10.244.0.9:53006 - 45322 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000085013s
	[INFO] 10.244.0.9:53006 - 31213 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000181085s
	[INFO] 10.244.0.9:43554 - 46818 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000098782s
	[INFO] 10.244.0.9:43554 - 46535 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000063421s
	[INFO] 10.244.0.9:55709 - 42431 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072788s
	[INFO] 10.244.0.9:55709 - 42182 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035584s
	[INFO] 10.244.0.9:60179 - 51156 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053978s
	[INFO] 10.244.0.9:60179 - 50727 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055856s
	[INFO] 10.244.0.9:51165 - 13801 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000058298s
	[INFO] 10.244.0.9:51165 - 13977 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000052216s
	[INFO] 10.244.0.22:43992 - 43615 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0006668s
	[INFO] 10.244.0.22:53951 - 56383 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000084607s
	[INFO] 10.244.0.22:44484 - 11268 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149089s
	[INFO] 10.244.0.22:41325 - 14751 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000105736s
	[INFO] 10.244.0.22:42478 - 14674 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000112638s
	[INFO] 10.244.0.22:38021 - 29189 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000057025s
	[INFO] 10.244.0.22:40421 - 36024 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000719676s
	[INFO] 10.244.0.22:57496 - 53057 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001304661s
	[INFO] 10.244.0.27:38306 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000478503s
	[INFO] 10.244.0.27:45324 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000293412s
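The coredns entries above are ordinary pod logs and should also be retrievable with kubectl while the cluster is still up, for example:

    kubectl --context addons-396564 -n kube-system logs coredns-7c65d6cfc9-jz7lb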
	
	
	==> describe nodes <==
	Name:               addons-396564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-396564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=addons-396564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T19_03_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-396564
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:03:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-396564
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:08:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:07:18 +0000   Thu, 05 Dec 2024 19:03:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:07:18 +0000   Thu, 05 Dec 2024 19:03:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:07:18 +0000   Thu, 05 Dec 2024 19:03:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:07:18 +0000   Thu, 05 Dec 2024 19:03:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    addons-396564
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d7365bcf3de43d58534ecb48390e7f3
	  System UUID:                8d7365bc-f3de-43d5-8534-ecb48390e7f3
	  Boot ID:                    2e6976b8-6ce2-402d-812d-ae00122d3fd1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  default                     hello-world-app-55bf9c44b4-z824g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-88jfh    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m41s
	  kube-system                 amd-gpu-device-plugin-xcvzc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 coredns-7c65d6cfc9-jz7lb                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m50s
	  kube-system                 etcd-addons-396564                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m56s
	  kube-system                 kube-apiserver-addons-396564                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-controller-manager-addons-396564        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kube-system                 kube-proxy-r9sk8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-scheduler-addons-396564                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 metrics-server-84c5f94fbc-p7wrj              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m44s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m48s                kube-proxy       
	  Normal  Starting                 6m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m1s (x8 over 6m1s)  kubelet          Node addons-396564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet          Node addons-396564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x7 over 6m1s)  kubelet          Node addons-396564 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m55s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m55s                kubelet          Node addons-396564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s                kubelet          Node addons-396564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s                kubelet          Node addons-396564 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m54s                kubelet          Node addons-396564 status is now: NodeReady
	  Normal  RegisteredNode           5m51s                node-controller  Node addons-396564 event: Registered Node addons-396564 in Controller
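The node summary above is the standard kubectl node description; to re-query it while debugging:

    kubectl --context addons-396564 describe node addons-396564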
	
	
	==> dmesg <==
	[  +0.080739] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.293286] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	[  +0.152245] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.040200] kauditd_printk_skb: 118 callbacks suppressed
	[  +5.128557] kauditd_printk_skb: 137 callbacks suppressed
	[  +7.941395] kauditd_printk_skb: 87 callbacks suppressed
	[Dec 5 19:04] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.367507] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.686741] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.110383] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.006668] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.616997] kauditd_printk_skb: 11 callbacks suppressed
	[Dec 5 19:05] kauditd_printk_skb: 14 callbacks suppressed
	[ +36.029991] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.022704] kauditd_printk_skb: 11 callbacks suppressed
	[Dec 5 19:06] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.138369] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.088802] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.180590] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.639910] kauditd_printk_skb: 59 callbacks suppressed
	[  +8.518882] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.665826] kauditd_printk_skb: 23 callbacks suppressed
	[Dec 5 19:07] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.870219] kauditd_printk_skb: 7 callbacks suppressed
	[Dec 5 19:09] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [e16b5c2bcff1353331c9f81ed262d909e8ba9ec868b3ad9d3a34b228ba38fe53] <==
	{"level":"info","ts":"2024-12-05T19:04:41.749782Z","caller":"traceutil/trace.go:171","msg":"trace[47050269] linearizableReadLoop","detail":"{readStateIndex:1168; appliedIndex:1168; }","duration":"393.219427ms","start":"2024-12-05T19:04:41.356547Z","end":"2024-12-05T19:04:41.749767Z","steps":["trace[47050269] 'read index received'  (duration: 393.211648ms)","trace[47050269] 'applied index is now lower than readState.Index'  (duration: 6.962µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T19:04:41.749936Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"393.379814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.9\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-12-05T19:04:41.750009Z","caller":"traceutil/trace.go:171","msg":"trace[1127141615] range","detail":"{range_begin:/registry/masterleases/192.168.39.9; range_end:; response_count:1; response_revision:1134; }","duration":"393.459568ms","start":"2024-12-05T19:04:41.356542Z","end":"2024-12-05T19:04:41.750001Z","steps":["trace[1127141615] 'agreement among raft nodes before linearized reading'  (duration: 393.303173ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:41.750030Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:04:41.356500Z","time spent":"393.524767ms","remote":"127.0.0.1:36586","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":155,"request content":"key:\"/registry/masterleases/192.168.39.9\" "}
	{"level":"info","ts":"2024-12-05T19:04:41.822114Z","caller":"traceutil/trace.go:171","msg":"trace[414069872] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"315.482265ms","start":"2024-12-05T19:04:41.506617Z","end":"2024-12-05T19:04:41.822100Z","steps":["trace[414069872] 'process raft request'  (duration: 311.360036ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:41.823203Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:04:41.506581Z","time spent":"316.511236ms","remote":"127.0.0.1:39538","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-k6bflfeh4ottzntjhcieubcdvm\" mod_revision:1049 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-k6bflfeh4ottzntjhcieubcdvm\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-k6bflfeh4ottzntjhcieubcdvm\" > >"}
	{"level":"warn","ts":"2024-12-05T19:04:41.824994Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"447.1684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:04:41.825037Z","caller":"traceutil/trace.go:171","msg":"trace[1278542418] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1135; }","duration":"447.215587ms","start":"2024-12-05T19:04:41.377810Z","end":"2024-12-05T19:04:41.825026Z","steps":["trace[1278542418] 'agreement among raft nodes before linearized reading'  (duration: 447.14086ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:41.825067Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:04:41.377770Z","time spent":"447.290337ms","remote":"127.0.0.1:36536","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-12-05T19:04:41.825215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.0302ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:04:41.825243Z","caller":"traceutil/trace.go:171","msg":"trace[406275605] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"114.061839ms","start":"2024-12-05T19:04:41.711173Z","end":"2024-12-05T19:04:41.825235Z","steps":["trace[406275605] 'agreement among raft nodes before linearized reading'  (duration: 113.999526ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:41.825323Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.381181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:04:41.825347Z","caller":"traceutil/trace.go:171","msg":"trace[1268234780] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"174.408621ms","start":"2024-12-05T19:04:41.650932Z","end":"2024-12-05T19:04:41.825341Z","steps":["trace[1268234780] 'agreement among raft nodes before linearized reading'  (duration: 174.361258ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:05:46.973365Z","caller":"traceutil/trace.go:171","msg":"trace[1518702504] linearizableReadLoop","detail":"{readStateIndex:1311; appliedIndex:1310; }","duration":"185.554163ms","start":"2024-12-05T19:05:46.787577Z","end":"2024-12-05T19:05:46.973131Z","steps":["trace[1518702504] 'read index received'  (duration: 184.850581ms)","trace[1518702504] 'applied index is now lower than readState.Index'  (duration: 702.525µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T19:05:46.974023Z","caller":"traceutil/trace.go:171","msg":"trace[1898178183] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"290.500968ms","start":"2024-12-05T19:05:46.683313Z","end":"2024-12-05T19:05:46.973814Z","steps":["trace[1898178183] 'process raft request'  (duration: 288.983657ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:05:49.558286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"408.925538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:05:49.558417Z","caller":"traceutil/trace.go:171","msg":"trace[2142991964] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1263; }","duration":"409.132402ms","start":"2024-12-05T19:05:49.149262Z","end":"2024-12-05T19:05:49.558394Z","steps":["trace[2142991964] 'range keys from in-memory index tree'  (duration: 408.878097ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:05:49.558483Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:05:49.149229Z","time spent":"409.22915ms","remote":"127.0.0.1:39460","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-05T19:05:49.558691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.60697ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:05:49.558797Z","caller":"traceutil/trace.go:171","msg":"trace[2018104964] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1263; }","duration":"375.722755ms","start":"2024-12-05T19:05:49.183064Z","end":"2024-12-05T19:05:49.558787Z","steps":["trace[2018104964] 'range keys from in-memory index tree'  (duration: 375.597926ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:05:49.558991Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.840937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:05:49.559042Z","caller":"traceutil/trace.go:171","msg":"trace[831807354] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1263; }","duration":"186.894772ms","start":"2024-12-05T19:05:49.372139Z","end":"2024-12-05T19:05:49.559034Z","steps":["trace[831807354] 'range keys from in-memory index tree'  (duration: 186.767015ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:06:16.607245Z","caller":"traceutil/trace.go:171","msg":"trace[354493794] transaction","detail":"{read_only:false; response_revision:1414; number_of_response:1; }","duration":"185.937952ms","start":"2024-12-05T19:06:16.421291Z","end":"2024-12-05T19:06:16.607229Z","steps":["trace[354493794] 'process raft request'  (duration: 185.822705ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:07:12.039299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.132289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:07:12.039462Z","caller":"traceutil/trace.go:171","msg":"trace[1689487960] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1788; }","duration":"208.356664ms","start":"2024-12-05T19:07:11.831085Z","end":"2024-12-05T19:07:12.039442Z","steps":["trace[1689487960] 'range keys from in-memory index tree'  (duration: 207.973824ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:09:07 up 6 min,  0 users,  load average: 0.62, 1.05, 0.55
	Linux addons-396564 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d573ee316398f9d872687b34b67057b642f060e9f6be1e2047272f413f522cc6] <==
	E1205 19:05:13.782449       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.188.101:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.188.101:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.188.101:443: connect: connection refused" logger="UnhandledError"
	E1205 19:05:13.788179       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.188.101:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.188.101:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.188.101:443: connect: connection refused" logger="UnhandledError"
	I1205 19:05:13.862182       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1205 19:06:01.500586       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37926: use of closed network connection
	I1205 19:06:10.947111       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.149.211"}
	I1205 19:06:35.718353       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1205 19:06:36.762903       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1205 19:06:41.250682       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 19:06:41.487034       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.87.93"}
	E1205 19:06:45.893129       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1205 19:06:59.827550       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1205 19:07:19.373128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:07:19.373193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:07:19.402014       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:07:19.402129       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:07:19.414914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:07:19.415022       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:07:19.422133       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:07:19.422193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:07:19.448376       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:07:19.448423       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 19:07:20.415440       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 19:07:20.449207       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 19:07:20.592245       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 19:09:06.135256       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.144.164"}
	
	
	==> kube-controller-manager [555285d2c5baa83e8a141e31cb63b1ecf7f24747793d17bc184d077aa32d32ff] <==
	E1205 19:07:39.954758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1205 19:07:46.549230       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1205 19:07:46.549432       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 19:07:47.058508       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1205 19:07:47.058553       1 shared_informer.go:320] Caches are synced for garbage collector
	W1205 19:07:59.269193       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:07:59.269249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:07:59.632040       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:07:59.632138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:08:01.002374       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:08:01.002429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:08:03.000151       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:08:03.000247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:08:31.706921       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:08:31.707132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:08:36.862066       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:08:36.862189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:08:38.737077       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:08:38.737137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:08:46.933232       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:08:46.933469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1205 19:09:05.944796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.72411ms"
	I1205 19:09:05.963050       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.656867ms"
	I1205 19:09:05.999902       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.678289ms"
	I1205 19:09:06.000089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="55.431µs"
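The recurring PartialObjectMetadata watch failures look consistent with metadata informers still watching API groups that were removed earlier (the kube-apiserver log above shows the snapshot.storage.k8s.io watchers being terminated around 19:07:20). A quick, non-specific way to check whether that group is still served:

    kubectl --context addons-396564 api-resources --api-group=snapshot.storage.k8s.io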
	
	
	==> kube-proxy [25818b4b391668a26fc48ea3644968bead720d71a2521a4cf15faef5dcf7db75] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 19:03:18.941178       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 19:03:18.961226       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.9"]
	E1205 19:03:18.961321       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 19:03:19.050961       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 19:03:19.051016       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 19:03:19.051049       1 server_linux.go:169] "Using iptables Proxier"
	I1205 19:03:19.053864       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 19:03:19.054077       1 server.go:483] "Version info" version="v1.31.2"
	I1205 19:03:19.054106       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:03:19.055402       1 config.go:199] "Starting service config controller"
	I1205 19:03:19.055445       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 19:03:19.055473       1 config.go:105] "Starting endpoint slice config controller"
	I1205 19:03:19.055495       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 19:03:19.056166       1 config.go:328] "Starting node config controller"
	I1205 19:03:19.056200       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 19:03:19.155774       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 19:03:19.155867       1 shared_informer.go:320] Caches are synced for service config
	I1205 19:03:19.156356       1 shared_informer.go:320] Caches are synced for node config
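The nftables cleanup errors at the top of this section appear harmless here: the kernel rejects the nft commands with "Operation not supported", and kube-proxy proceeds with the iptables proxier (the "Using iptables Proxier" line). To spot-check the rules it programmed, a sketch assuming the standard KUBE-SERVICES chain in the nat table:

    minikube -p addons-396564 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head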
	
	
	==> kube-scheduler [acbbe2cd3da913dd2fca7ca3cb2016f3e74b87dcffb1f4e7915d49104984e28e] <==
	W1205 19:03:10.457877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:03:10.458069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.561384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:03:10.561444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.625864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:03:10.625904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.628395       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:03:10.628596       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 19:03:10.631679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:10.631979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.631881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 19:03:10.632148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.703172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:10.703379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.744779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:10.744910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.756458       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:03:10.756581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.760280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 19:03:10.760469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.858309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:03:10.858452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.867401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:10.867542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1205 19:03:12.778371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 19:09:02 addons-396564 kubelet[1218]: E1205 19:09:02.322754    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425742322223603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595908,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:09:02 addons-396564 kubelet[1218]: E1205 19:09:02.323025    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425742322223603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595908,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: E1205 19:09:05.940378    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="hostpath"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: E1205 19:09:05.940871    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="csi-provisioner"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: E1205 19:09:05.940934    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="csi-external-health-monitor-controller"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: E1205 19:09:05.940980    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="node-driver-registrar"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: E1205 19:09:05.941015    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="liveness-probe"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: E1205 19:09:05.941057    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="432c94bf-2efd-467c-95cb-1aa632b845cc" containerName="csi-resizer"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: E1205 19:09:05.941111    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e9fadd5-acf2-477a-9d62-c47987d16129" containerName="csi-attacher"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: E1205 19:09:05.941152    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b3247ae3-203c-44f6-82e8-ef0144eb6497" containerName="volume-snapshot-controller"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: E1205 19:09:05.941242    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4d32957-684f-41c3-947a-ddc8a4d8fb33" containerName="volume-snapshot-controller"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: E1205 19:09:05.941277    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="csi-snapshotter"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: E1205 19:09:05.941311    1218 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60b35f80-a8db-470b-9a72-02fa40c95cdc" containerName="task-pv-container"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: I1205 19:09:05.941424    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3247ae3-203c-44f6-82e8-ef0144eb6497" containerName="volume-snapshot-controller"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: I1205 19:09:05.941462    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b35f80-a8db-470b-9a72-02fa40c95cdc" containerName="task-pv-container"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: I1205 19:09:05.941494    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="csi-external-health-monitor-controller"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: I1205 19:09:05.941528    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="node-driver-registrar"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: I1205 19:09:05.941561    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="hostpath"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: I1205 19:09:05.941604    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="432c94bf-2efd-467c-95cb-1aa632b845cc" containerName="csi-resizer"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: I1205 19:09:05.941637    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="liveness-probe"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: I1205 19:09:05.941676    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="csi-provisioner"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: I1205 19:09:05.941767    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9fadd5-acf2-477a-9d62-c47987d16129" containerName="csi-attacher"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: I1205 19:09:05.941802    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d32957-684f-41c3-947a-ddc8a4d8fb33" containerName="volume-snapshot-controller"
	Dec 05 19:09:05 addons-396564 kubelet[1218]: I1205 19:09:05.941836    1218 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be510d8-669e-43b3-9429-cfb59274f96d" containerName="csi-snapshotter"
	Dec 05 19:09:06 addons-396564 kubelet[1218]: I1205 19:09:06.051264    1218 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flk6r\" (UniqueName: \"kubernetes.io/projected/a3b29e45-9d9f-400e-ac12-8846b47d56a4-kube-api-access-flk6r\") pod \"hello-world-app-55bf9c44b4-z824g\" (UID: \"a3b29e45-9d9f-400e-ac12-8846b47d56a4\") " pod="default/hello-world-app-55bf9c44b4-z824g"
	
	
	==> storage-provisioner [dbae14404a1302fb8ac2eb0ca8137ed78f59059acd980e3980d42a29c87e08f9] <==
	I1205 19:03:24.754564       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:03:24.776398       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:03:24.776485       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:03:24.794036       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:03:24.794225       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-396564_929486ec-e5da-4ae8-9917-781360e96da1!
	I1205 19:03:24.795194       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"564c2f78-3944-4585-98ef-beb3cd4944d8", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-396564_929486ec-e5da-4ae8-9917-781360e96da1 became leader
	I1205 19:03:24.894417       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-396564_929486ec-e5da-4ae8-9917-781360e96da1!
	

                                                
                                                
-- /stdout --
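Note on the log dump above: the kube-scheduler section is dominated by list/watch "forbidden" errors for user system:kube-scheduler. These typically reflect a startup race, where the scheduler's informers begin listing resources before the API server has finished reconciling the default RBAC policy; the errors stop once the bindings exist, and the section ends with a "Caches are synced" message and no further failures. If such errors persisted, one way to confirm what the scheduler identity is allowed to do is to send a SubjectAccessReview to the API server. The following is a minimal Go sketch for that check, not part of the test suite; it assumes a kubeconfig for the addons-396564 cluster at the default location.

	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from ~/.kube/config (adjust the path for the CI environment).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Ask the API server whether system:kube-scheduler may list PersistentVolumes,
		// mirroring one of the "forbidden" messages in the scheduler log above.
		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Resource: "persistentvolumes",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", resp.Status.Allowed, "reason:", resp.Status.Reason)
	}

The equivalent one-off check from a shell is kubectl auth can-i list persistentvolumes --as=system:kube-scheduler.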
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-396564 -n addons-396564
helpers_test.go:261: (dbg) Run:  kubectl --context addons-396564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-z824g ingress-nginx-admission-create-w2bgh ingress-nginx-admission-patch-5bssn
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-396564 describe pod hello-world-app-55bf9c44b4-z824g ingress-nginx-admission-create-w2bgh ingress-nginx-admission-patch-5bssn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-396564 describe pod hello-world-app-55bf9c44b4-z824g ingress-nginx-admission-create-w2bgh ingress-nginx-admission-patch-5bssn: exit status 1 (71.232564ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-z824g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-396564/192.168.39.9
	Start Time:       Thu, 05 Dec 2024 19:09:05 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-flk6r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-flk6r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-z824g to addons-396564
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-w2bgh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5bssn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-396564 describe pod hello-world-app-55bf9c44b4-z824g ingress-nginx-admission-create-w2bgh ingress-nginx-admission-patch-5bssn: exit status 1
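In the describe output above, hello-world-app-55bf9c44b4-z824g is still in ContainerCreating only seconds after being scheduled, and the two ingress-nginx admission pods appear to have already been removed, hence the NotFound errors. A diagnostic that needs to block until such a pod is Ready can poll the API directly; below is a minimal client-go sketch (an illustration, not the harness code), with the pod name and namespace taken from the describe output and a recent apimachinery assumed for wait.PollUntilContextTimeout.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s, for up to 2 minutes, until the pod reports the Ready condition.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("default").Get(ctx, "hello-world-app-55bf9c44b4-z824g", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling through NotFound and transient API errors
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}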
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-396564 addons disable ingress-dns --alsologtostderr -v=1: (1.614203346s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-396564 addons disable ingress --alsologtostderr -v=1: (7.726169619s)
--- FAIL: TestAddons/parallel/Ingress (156.95s)
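For context on the failure: the Audit table in the next post-mortem records an `ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` against addons-396564 with no recorded end time, i.e. the in-VM check of the ingress did not complete within its window. The same endpoint can be probed from outside the VM by overriding the HTTP Host header; the sketch below is illustrative only and assumes the node IP 192.168.39.9 from the describe output is reachable from the test host.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Dial the node IP directly but present the Host header the test's curl uses.
		req, err := http.NewRequest(http.MethodGet, "http://192.168.39.9/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // overrides the Host header for this request

		client := &http.Client{Timeout: 5 * time.Second}
		for attempt := 1; attempt <= 5; attempt++ {
			resp, err := client.Do(req)
			if err != nil {
				fmt.Printf("attempt %d: %v\n", attempt, err)
				time.Sleep(3 * time.Second)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: %s (%d bytes)\n", attempt, resp.Status, len(body))
			return
		}
	}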

                                                
                                    
TestAddons/parallel/MetricsServer (329.56s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.976896ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-p7wrj" [3aec8457-6ee0-4eeb-9abe-871b30996d06] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004348811s
addons_test.go:402: (dbg) Run:  kubectl --context addons-396564 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-396564 top pods -n kube-system: exit status 1 (71.972607ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-xcvzc, age: 3m15.0664925s

                                                
                                                
** /stderr **
I1205 19:06:34.069222  538186 retry.go:31] will retry after 2.576401664s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-396564 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-396564 top pods -n kube-system: exit status 1 (70.250387ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-xcvzc, age: 3m17.714056031s

                                                
                                                
** /stderr **
I1205 19:06:36.716823  538186 retry.go:31] will retry after 6.634801341s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-396564 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-396564 top pods -n kube-system: exit status 1 (67.972035ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-xcvzc, age: 3m24.417297621s

                                                
                                                
** /stderr **
I1205 19:06:43.420451  538186 retry.go:31] will retry after 8.475624865s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-396564 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-396564 top pods -n kube-system: exit status 1 (74.56766ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-xcvzc, age: 3m32.968708769s

                                                
                                                
** /stderr **
I1205 19:06:51.971122  538186 retry.go:31] will retry after 7.917920535s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-396564 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-396564 top pods -n kube-system: exit status 1 (94.140724ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-xcvzc, age: 3m40.981358179s

                                                
                                                
** /stderr **
I1205 19:06:59.983793  538186 retry.go:31] will retry after 20.23671202s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-396564 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-396564 top pods -n kube-system: exit status 1 (68.78359ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-xcvzc, age: 4m1.288049073s

                                                
                                                
** /stderr **
I1205 19:07:20.290512  538186 retry.go:31] will retry after 13.668776946s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-396564 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-396564 top pods -n kube-system: exit status 1 (63.390997ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-xcvzc, age: 4m15.022443796s

                                                
                                                
** /stderr **
I1205 19:07:34.024855  538186 retry.go:31] will retry after 33.954847019s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-396564 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-396564 top pods -n kube-system: exit status 1 (65.957772ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-xcvzc, age: 4m49.043939769s

                                                
                                                
** /stderr **
I1205 19:08:08.046479  538186 retry.go:31] will retry after 1m14.268087695s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-396564 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-396564 top pods -n kube-system: exit status 1 (73.329094ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-xcvzc, age: 6m3.385732702s

                                                
                                                
** /stderr **
I1205 19:09:22.388372  538186 retry.go:31] will retry after 1m26.262466157s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-396564 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-396564 top pods -n kube-system: exit status 1 (70.384869ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-xcvzc, age: 7m29.720184771s

                                                
                                                
** /stderr **
I1205 19:10:48.722580  538186 retry.go:31] will retry after 1m6.041761397s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-396564 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-396564 top pods -n kube-system: exit status 1 (66.294283ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-xcvzc, age: 8m35.831023907s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
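Every `kubectl top pods` attempt above exits non-zero for the same reason: metrics are never reported for kube-system/amd-gpu-device-plugin-xcvzc, even across more than five minutes of retries (the pod's age grows from roughly 3m15s to 8m35s between the first and last attempt). When narrowing down this kind of flake it can help to query the metrics.k8s.io API directly and list exactly which pods metrics-server has samples for. The following is a minimal Go sketch using the generated metrics clientset, again assuming a kubeconfig at the default location; it is a diagnostic aid, not part of the test.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		mc, err := metricsclient.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List the PodMetrics objects metrics-server has produced for kube-system.
		// Pods missing from this list are the ones `kubectl top pods` complains about.
		pm, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pm.Items {
			for _, c := range p.Containers {
				fmt.Printf("%s/%s cpu=%s mem=%s\n", p.Name, c.Name, c.Usage.Cpu(), c.Usage.Memory())
			}
		}
	}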
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-396564 -n addons-396564
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-396564 logs -n 25: (1.266218544s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-196484                                                                     | download-only-196484 | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:02 UTC |
	| delete  | -p download-only-765744                                                                     | download-only-765744 | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-199569 | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC |                     |
	|         | binary-mirror-199569                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46195                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-199569                                                                     | binary-mirror-199569 | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:02 UTC |
	| addons  | enable dashboard -p                                                                         | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC |                     |
	|         | addons-396564                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC |                     |
	|         | addons-396564                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-396564 --wait=true                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:05 UTC | 05 Dec 24 19:05 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | -p addons-396564                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-396564 addons                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-396564 ip                                                                            | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-396564 addons                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-396564 ssh cat                                                                       | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | /opt/local-path-provisioner/pvc-41b3db4e-7b14-4edb-9a67-ba393129c596_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:07 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-396564 addons                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC | 05 Dec 24 19:06 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-396564 ssh curl -s                                                                   | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:06 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-396564 addons                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:07 UTC | 05 Dec 24 19:07 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-396564 addons                                                                        | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:07 UTC | 05 Dec 24 19:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-396564 ip                                                                            | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:09 UTC | 05 Dec 24 19:09 UTC |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:09 UTC | 05 Dec 24 19:09 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-396564 addons disable                                                                | addons-396564        | jenkins | v1.34.0 | 05 Dec 24 19:09 UTC | 05 Dec 24 19:09 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:02:30
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:02:30.955385  538905 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:02:30.955633  538905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:02:30.955641  538905 out.go:358] Setting ErrFile to fd 2...
	I1205 19:02:30.955645  538905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:02:30.955806  538905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:02:30.956507  538905 out.go:352] Setting JSON to false
	I1205 19:02:30.957508  538905 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6297,"bootTime":1733419054,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:02:30.957618  538905 start.go:139] virtualization: kvm guest
	I1205 19:02:30.959863  538905 out.go:177] * [addons-396564] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:02:30.961345  538905 notify.go:220] Checking for updates...
	I1205 19:02:30.961366  538905 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:02:30.962956  538905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:02:30.964562  538905 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:02:30.966034  538905 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:02:30.967498  538905 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:02:30.968985  538905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:02:30.970711  538905 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:02:31.003531  538905 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 19:02:31.005105  538905 start.go:297] selected driver: kvm2
	I1205 19:02:31.005127  538905 start.go:901] validating driver "kvm2" against <nil>
	I1205 19:02:31.005146  538905 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:02:31.005915  538905 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:02:31.006031  538905 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:02:31.022290  538905 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:02:31.022354  538905 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:02:31.022614  538905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:02:31.022646  538905 cni.go:84] Creating CNI manager for ""
	I1205 19:02:31.022689  538905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:02:31.022699  538905 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 19:02:31.022757  538905 start.go:340] cluster config:
	{Name:addons-396564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-396564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:02:31.022880  538905 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:02:31.024711  538905 out.go:177] * Starting "addons-396564" primary control-plane node in "addons-396564" cluster
	I1205 19:02:31.026081  538905 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:02:31.026118  538905 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:02:31.026130  538905 cache.go:56] Caching tarball of preloaded images
	I1205 19:02:31.026215  538905 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:02:31.026225  538905 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:02:31.026655  538905 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/config.json ...
	I1205 19:02:31.026695  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/config.json: {Name:mk077ee5da67ce1e15bac4e6e2cfc85d4920c391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:02:31.026871  538905 start.go:360] acquireMachinesLock for addons-396564: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:02:31.026936  538905 start.go:364] duration metric: took 47.419µs to acquireMachinesLock for "addons-396564"
	I1205 19:02:31.026959  538905 start.go:93] Provisioning new machine with config: &{Name:addons-396564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-396564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:02:31.027053  538905 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 19:02:31.028890  538905 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1205 19:02:31.029049  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:02:31.029092  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:02:31.044420  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45495
	I1205 19:02:31.044946  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:02:31.045522  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:02:31.045547  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:02:31.045971  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:02:31.046148  538905 main.go:141] libmachine: (addons-396564) Calling .GetMachineName
	I1205 19:02:31.046326  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:31.046512  538905 start.go:159] libmachine.API.Create for "addons-396564" (driver="kvm2")
	I1205 19:02:31.046553  538905 client.go:168] LocalClient.Create starting
	I1205 19:02:31.046599  538905 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:02:31.280827  538905 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:02:31.354195  538905 main.go:141] libmachine: Running pre-create checks...
	I1205 19:02:31.354222  538905 main.go:141] libmachine: (addons-396564) Calling .PreCreateCheck
	I1205 19:02:31.354845  538905 main.go:141] libmachine: (addons-396564) Calling .GetConfigRaw
	I1205 19:02:31.355996  538905 main.go:141] libmachine: Creating machine...
	I1205 19:02:31.356037  538905 main.go:141] libmachine: (addons-396564) Calling .Create
	I1205 19:02:31.356960  538905 main.go:141] libmachine: (addons-396564) Creating KVM machine...
	I1205 19:02:31.358193  538905 main.go:141] libmachine: (addons-396564) DBG | found existing default KVM network
	I1205 19:02:31.359163  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:31.358996  538927 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a50}
	I1205 19:02:31.359332  538905 main.go:141] libmachine: (addons-396564) DBG | created network xml: 
	I1205 19:02:31.359353  538905 main.go:141] libmachine: (addons-396564) DBG | <network>
	I1205 19:02:31.359364  538905 main.go:141] libmachine: (addons-396564) DBG |   <name>mk-addons-396564</name>
	I1205 19:02:31.359371  538905 main.go:141] libmachine: (addons-396564) DBG |   <dns enable='no'/>
	I1205 19:02:31.359383  538905 main.go:141] libmachine: (addons-396564) DBG |   
	I1205 19:02:31.359392  538905 main.go:141] libmachine: (addons-396564) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 19:02:31.359401  538905 main.go:141] libmachine: (addons-396564) DBG |     <dhcp>
	I1205 19:02:31.359409  538905 main.go:141] libmachine: (addons-396564) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 19:02:31.359414  538905 main.go:141] libmachine: (addons-396564) DBG |     </dhcp>
	I1205 19:02:31.359421  538905 main.go:141] libmachine: (addons-396564) DBG |   </ip>
	I1205 19:02:31.359426  538905 main.go:141] libmachine: (addons-396564) DBG |   
	I1205 19:02:31.359433  538905 main.go:141] libmachine: (addons-396564) DBG | </network>
	I1205 19:02:31.359496  538905 main.go:141] libmachine: (addons-396564) DBG | 
	I1205 19:02:31.365746  538905 main.go:141] libmachine: (addons-396564) DBG | trying to create private KVM network mk-addons-396564 192.168.39.0/24...
	I1205 19:02:31.431835  538905 main.go:141] libmachine: (addons-396564) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564 ...
	I1205 19:02:31.431867  538905 main.go:141] libmachine: (addons-396564) DBG | private KVM network mk-addons-396564 192.168.39.0/24 created
	I1205 19:02:31.431883  538905 main.go:141] libmachine: (addons-396564) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:02:31.431908  538905 main.go:141] libmachine: (addons-396564) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:02:31.431965  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:31.431759  538927 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:02:31.729814  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:31.729662  538927 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa...
	I1205 19:02:31.803910  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:31.803745  538927 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/addons-396564.rawdisk...
	I1205 19:02:31.803943  538905 main.go:141] libmachine: (addons-396564) DBG | Writing magic tar header
	I1205 19:02:31.803958  538905 main.go:141] libmachine: (addons-396564) DBG | Writing SSH key tar header
	I1205 19:02:31.803977  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:31.803883  538927 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564 ...
	I1205 19:02:31.803990  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564
	I1205 19:02:31.804069  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564 (perms=drwx------)
	I1205 19:02:31.804097  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:02:31.804106  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:02:31.804118  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:02:31.804124  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:02:31.804131  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:02:31.804137  538905 main.go:141] libmachine: (addons-396564) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:02:31.804148  538905 main.go:141] libmachine: (addons-396564) Creating domain...
	I1205 19:02:31.804161  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:02:31.804170  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:02:31.804183  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:02:31.804202  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:02:31.804211  538905 main.go:141] libmachine: (addons-396564) DBG | Checking permissions on dir: /home
	I1205 19:02:31.804219  538905 main.go:141] libmachine: (addons-396564) DBG | Skipping /home - not owner
	I1205 19:02:31.805271  538905 main.go:141] libmachine: (addons-396564) define libvirt domain using xml: 
	I1205 19:02:31.805294  538905 main.go:141] libmachine: (addons-396564) <domain type='kvm'>
	I1205 19:02:31.805304  538905 main.go:141] libmachine: (addons-396564)   <name>addons-396564</name>
	I1205 19:02:31.805312  538905 main.go:141] libmachine: (addons-396564)   <memory unit='MiB'>4000</memory>
	I1205 19:02:31.805333  538905 main.go:141] libmachine: (addons-396564)   <vcpu>2</vcpu>
	I1205 19:02:31.805343  538905 main.go:141] libmachine: (addons-396564)   <features>
	I1205 19:02:31.805352  538905 main.go:141] libmachine: (addons-396564)     <acpi/>
	I1205 19:02:31.805359  538905 main.go:141] libmachine: (addons-396564)     <apic/>
	I1205 19:02:31.805368  538905 main.go:141] libmachine: (addons-396564)     <pae/>
	I1205 19:02:31.805378  538905 main.go:141] libmachine: (addons-396564)     
	I1205 19:02:31.805386  538905 main.go:141] libmachine: (addons-396564)   </features>
	I1205 19:02:31.805395  538905 main.go:141] libmachine: (addons-396564)   <cpu mode='host-passthrough'>
	I1205 19:02:31.805401  538905 main.go:141] libmachine: (addons-396564)   
	I1205 19:02:31.805411  538905 main.go:141] libmachine: (addons-396564)   </cpu>
	I1205 19:02:31.805422  538905 main.go:141] libmachine: (addons-396564)   <os>
	I1205 19:02:31.805433  538905 main.go:141] libmachine: (addons-396564)     <type>hvm</type>
	I1205 19:02:31.805445  538905 main.go:141] libmachine: (addons-396564)     <boot dev='cdrom'/>
	I1205 19:02:31.805451  538905 main.go:141] libmachine: (addons-396564)     <boot dev='hd'/>
	I1205 19:02:31.805457  538905 main.go:141] libmachine: (addons-396564)     <bootmenu enable='no'/>
	I1205 19:02:31.805461  538905 main.go:141] libmachine: (addons-396564)   </os>
	I1205 19:02:31.805466  538905 main.go:141] libmachine: (addons-396564)   <devices>
	I1205 19:02:31.805472  538905 main.go:141] libmachine: (addons-396564)     <disk type='file' device='cdrom'>
	I1205 19:02:31.805482  538905 main.go:141] libmachine: (addons-396564)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/boot2docker.iso'/>
	I1205 19:02:31.805491  538905 main.go:141] libmachine: (addons-396564)       <target dev='hdc' bus='scsi'/>
	I1205 19:02:31.805496  538905 main.go:141] libmachine: (addons-396564)       <readonly/>
	I1205 19:02:31.805500  538905 main.go:141] libmachine: (addons-396564)     </disk>
	I1205 19:02:31.805537  538905 main.go:141] libmachine: (addons-396564)     <disk type='file' device='disk'>
	I1205 19:02:31.805565  538905 main.go:141] libmachine: (addons-396564)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:02:31.805587  538905 main.go:141] libmachine: (addons-396564)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/addons-396564.rawdisk'/>
	I1205 19:02:31.805599  538905 main.go:141] libmachine: (addons-396564)       <target dev='hda' bus='virtio'/>
	I1205 19:02:31.805608  538905 main.go:141] libmachine: (addons-396564)     </disk>
	I1205 19:02:31.805616  538905 main.go:141] libmachine: (addons-396564)     <interface type='network'>
	I1205 19:02:31.805627  538905 main.go:141] libmachine: (addons-396564)       <source network='mk-addons-396564'/>
	I1205 19:02:31.805639  538905 main.go:141] libmachine: (addons-396564)       <model type='virtio'/>
	I1205 19:02:31.805650  538905 main.go:141] libmachine: (addons-396564)     </interface>
	I1205 19:02:31.805661  538905 main.go:141] libmachine: (addons-396564)     <interface type='network'>
	I1205 19:02:31.805671  538905 main.go:141] libmachine: (addons-396564)       <source network='default'/>
	I1205 19:02:31.805685  538905 main.go:141] libmachine: (addons-396564)       <model type='virtio'/>
	I1205 19:02:31.805720  538905 main.go:141] libmachine: (addons-396564)     </interface>
	I1205 19:02:31.805750  538905 main.go:141] libmachine: (addons-396564)     <serial type='pty'>
	I1205 19:02:31.805766  538905 main.go:141] libmachine: (addons-396564)       <target port='0'/>
	I1205 19:02:31.805778  538905 main.go:141] libmachine: (addons-396564)     </serial>
	I1205 19:02:31.805793  538905 main.go:141] libmachine: (addons-396564)     <console type='pty'>
	I1205 19:02:31.805806  538905 main.go:141] libmachine: (addons-396564)       <target type='serial' port='0'/>
	I1205 19:02:31.805835  538905 main.go:141] libmachine: (addons-396564)     </console>
	I1205 19:02:31.805852  538905 main.go:141] libmachine: (addons-396564)     <rng model='virtio'>
	I1205 19:02:31.805863  538905 main.go:141] libmachine: (addons-396564)       <backend model='random'>/dev/random</backend>
	I1205 19:02:31.805883  538905 main.go:141] libmachine: (addons-396564)     </rng>
	I1205 19:02:31.805901  538905 main.go:141] libmachine: (addons-396564)     
	I1205 19:02:31.805920  538905 main.go:141] libmachine: (addons-396564)     
	I1205 19:02:31.805932  538905 main.go:141] libmachine: (addons-396564)   </devices>
	I1205 19:02:31.805939  538905 main.go:141] libmachine: (addons-396564) </domain>
	I1205 19:02:31.805956  538905 main.go:141] libmachine: (addons-396564) 
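The block above is the libvirt domain XML that the kvm2 driver renders before asking libvirt to create the VM. As a rough illustration of the same render-then-define pattern only (not minikube's actual implementation; all names and paths below are placeholders), a minimal sketch in Go:

```go
// Hypothetical sketch: render a minimal libvirt domain XML and register it
// with `virsh define`. Placeholder values throughout; requires libvirt.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>`

type domainConfig struct {
	Name, ISO, Disk, Network string
	MemoryMiB, VCPUs         int
}

func defineDomain(cfg domainConfig) error {
	var buf bytes.Buffer
	if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(&buf, cfg); err != nil {
		return err
	}
	path := "/tmp/" + cfg.Name + ".xml"
	if err := os.WriteFile(path, buf.Bytes(), 0o644); err != nil {
		return err
	}
	// "virsh define" registers the domain; "virsh start" would boot it.
	if out, err := exec.Command("virsh", "define", path).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	return nil
}

func main() {
	cfg := domainConfig{
		Name:      "example-vm", // placeholder, not the test's machine
		ISO:       "/tmp/boot2docker.iso",
		Disk:      "/tmp/example-vm.rawdisk",
		Network:   "default",
		MemoryMiB: 4000,
		VCPUs:     2,
	}
	if err := defineDomain(cfg); err != nil {
		fmt.Println(err)
	}
}
```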
	I1205 19:02:31.813231  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:3e:46:6d in network default
	I1205 19:02:31.813848  538905 main.go:141] libmachine: (addons-396564) Ensuring networks are active...
	I1205 19:02:31.813871  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:31.814567  538905 main.go:141] libmachine: (addons-396564) Ensuring network default is active
	I1205 19:02:31.815030  538905 main.go:141] libmachine: (addons-396564) Ensuring network mk-addons-396564 is active
	I1205 19:02:31.816632  538905 main.go:141] libmachine: (addons-396564) Getting domain xml...
	I1205 19:02:31.817402  538905 main.go:141] libmachine: (addons-396564) Creating domain...
	I1205 19:02:33.253599  538905 main.go:141] libmachine: (addons-396564) Waiting to get IP...
	I1205 19:02:33.254373  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:33.254752  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:33.254789  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:33.254731  538927 retry.go:31] will retry after 280.930998ms: waiting for machine to come up
	I1205 19:02:33.537487  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:33.537910  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:33.537941  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:33.537868  538927 retry.go:31] will retry after 259.854298ms: waiting for machine to come up
	I1205 19:02:33.799485  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:33.799931  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:33.799959  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:33.799869  538927 retry.go:31] will retry after 398.375805ms: waiting for machine to come up
	I1205 19:02:34.199531  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:34.199933  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:34.199985  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:34.199918  538927 retry.go:31] will retry after 607.832689ms: waiting for machine to come up
	I1205 19:02:34.809790  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:34.810215  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:34.810239  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:34.810180  538927 retry.go:31] will retry after 562.585715ms: waiting for machine to come up
	I1205 19:02:35.374055  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:35.374564  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:35.374592  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:35.374507  538927 retry.go:31] will retry after 628.854692ms: waiting for machine to come up
	I1205 19:02:36.005446  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:36.005860  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:36.005893  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:36.005814  538927 retry.go:31] will retry after 1.039428653s: waiting for machine to come up
	I1205 19:02:37.046770  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:37.047259  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:37.047290  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:37.047215  538927 retry.go:31] will retry after 971.053342ms: waiting for machine to come up
	I1205 19:02:38.019641  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:38.020069  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:38.020093  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:38.020010  538927 retry.go:31] will retry after 1.410662317s: waiting for machine to come up
	I1205 19:02:39.432627  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:39.433098  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:39.433123  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:39.433042  538927 retry.go:31] will retry after 1.497979927s: waiting for machine to come up
	I1205 19:02:40.933032  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:40.933435  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:40.933481  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:40.933426  538927 retry.go:31] will retry after 2.733921879s: waiting for machine to come up
	I1205 19:02:43.669442  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:43.669835  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:43.669869  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:43.669776  538927 retry.go:31] will retry after 3.113935772s: waiting for machine to come up
	I1205 19:02:46.785658  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:46.786068  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:46.786112  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:46.785992  538927 retry.go:31] will retry after 3.769972558s: waiting for machine to come up
	I1205 19:02:50.559967  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:50.560354  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find current IP address of domain addons-396564 in network mk-addons-396564
	I1205 19:02:50.560379  538905 main.go:141] libmachine: (addons-396564) DBG | I1205 19:02:50.560306  538927 retry.go:31] will retry after 3.65413274s: waiting for machine to come up
	I1205 19:02:54.217489  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.217869  538905 main.go:141] libmachine: (addons-396564) Found IP for machine: 192.168.39.9
	I1205 19:02:54.217902  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has current primary IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.217911  538905 main.go:141] libmachine: (addons-396564) Reserving static IP address...
	I1205 19:02:54.218238  538905 main.go:141] libmachine: (addons-396564) DBG | unable to find host DHCP lease matching {name: "addons-396564", mac: "52:54:00:86:dd:b4", ip: "192.168.39.9"} in network mk-addons-396564
	I1205 19:02:54.293917  538905 main.go:141] libmachine: (addons-396564) Reserved static IP address: 192.168.39.9
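The lines above show the driver polling for the machine's IP with steadily growing delays (roughly 280ms up to a few seconds) until a DHCP lease appears. A minimal sketch of that retry-with-backoff pattern, with lookupIP standing in for the real lease lookup:

```go
// Sketch of a wait-for-IP loop with growing, jittered delays. lookupIP is
// a placeholder; a real implementation would inspect DHCP leases for the
// domain's MAC address.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, roughly matching the
		// 280ms -> ~3.7s progression visible in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		return "", errors.New("no lease yet") // stub lookup
	}, 3*time.Second)
	fmt.Println(ip, err)
}
```

Capping the delay keeps a slow boot from stretching individual waits indefinitely while still backing off from the initial sub-second polls.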
	I1205 19:02:54.293953  538905 main.go:141] libmachine: (addons-396564) DBG | Getting to WaitForSSH function...
	I1205 19:02:54.293961  538905 main.go:141] libmachine: (addons-396564) Waiting for SSH to be available...
	I1205 19:02:54.296405  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.296797  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:minikube Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.296834  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.297027  538905 main.go:141] libmachine: (addons-396564) DBG | Using SSH client type: external
	I1205 19:02:54.297051  538905 main.go:141] libmachine: (addons-396564) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa (-rw-------)
	I1205 19:02:54.297091  538905 main.go:141] libmachine: (addons-396564) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:02:54.297110  538905 main.go:141] libmachine: (addons-396564) DBG | About to run SSH command:
	I1205 19:02:54.297142  538905 main.go:141] libmachine: (addons-396564) DBG | exit 0
	I1205 19:02:54.428905  538905 main.go:141] libmachine: (addons-396564) DBG | SSH cmd err, output: <nil>: 
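The WaitForSSH step above shells out to the system ssh client with hardened options and treats a successful remote `exit 0` as proof the machine is reachable. A hypothetical sketch of that probe (host, user, and key path are placeholders):

```go
// Sketch of an SSH reachability probe: run `ssh ... exit 0` with the same
// kind of options seen in the log and treat exit status 0 as "ready".
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(user, host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	const key = "/path/to/id_rsa" // placeholder key path
	for i := 0; i < 30; i++ {
		if sshReady("docker", "192.168.39.9", key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```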
	I1205 19:02:54.429133  538905 main.go:141] libmachine: (addons-396564) KVM machine creation complete!
	I1205 19:02:54.429457  538905 main.go:141] libmachine: (addons-396564) Calling .GetConfigRaw
	I1205 19:02:54.430070  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:54.430276  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:54.430554  538905 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:02:54.430578  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:02:54.432004  538905 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:02:54.432024  538905 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:02:54.432031  538905 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:02:54.432037  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:54.434508  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.435033  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.435058  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.435295  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:54.435508  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.435790  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.435987  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:54.436181  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:54.436496  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:54.436513  538905 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:02:54.543902  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:02:54.543938  538905 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:02:54.543946  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:54.546761  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.547167  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.547205  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.547392  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:54.547604  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.547804  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.547927  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:54.548074  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:54.548262  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:54.548302  538905 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:02:54.662064  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:02:54.662182  538905 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:02:54.662195  538905 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:02:54.662204  538905 main.go:141] libmachine: (addons-396564) Calling .GetMachineName
	I1205 19:02:54.662497  538905 buildroot.go:166] provisioning hostname "addons-396564"
	I1205 19:02:54.662550  538905 main.go:141] libmachine: (addons-396564) Calling .GetMachineName
	I1205 19:02:54.662771  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:54.665508  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.665898  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.665930  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.666130  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:54.666322  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.666519  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.666697  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:54.666861  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:54.667060  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:54.667074  538905 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-396564 && echo "addons-396564" | sudo tee /etc/hostname
	I1205 19:02:54.794326  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-396564
	
	I1205 19:02:54.794384  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:54.797379  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.797716  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.797744  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.797932  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:54.798140  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.798305  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:54.798449  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:54.798732  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:54.798923  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:54.798940  538905 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-396564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-396564/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-396564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:02:54.918401  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:02:54.918433  538905 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:02:54.918459  538905 buildroot.go:174] setting up certificates
	I1205 19:02:54.918475  538905 provision.go:84] configureAuth start
	I1205 19:02:54.918484  538905 main.go:141] libmachine: (addons-396564) Calling .GetMachineName
	I1205 19:02:54.918771  538905 main.go:141] libmachine: (addons-396564) Calling .GetIP
	I1205 19:02:54.921280  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.921639  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.921668  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.921844  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:54.923686  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.924008  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:54.924040  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:54.924151  538905 provision.go:143] copyHostCerts
	I1205 19:02:54.924219  538905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:02:54.924377  538905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:02:54.924443  538905 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:02:54.924492  538905 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.addons-396564 san=[127.0.0.1 192.168.39.9 addons-396564 localhost minikube]
	I1205 19:02:55.073548  538905 provision.go:177] copyRemoteCerts
	I1205 19:02:55.073614  538905 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:02:55.073642  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.077543  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.078029  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.078053  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.078328  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.078560  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.078799  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.079011  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:02:55.163532  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:02:55.188145  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:02:55.212749  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:02:55.237949  538905 provision.go:87] duration metric: took 319.45828ms to configureAuth
	I1205 19:02:55.237984  538905 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:02:55.238194  538905 config.go:182] Loaded profile config "addons-396564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:02:55.238286  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.241223  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.241551  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.241577  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.241750  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.241974  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.242157  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.242359  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.242562  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:55.242743  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:55.242757  538905 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:02:55.500862  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:02:55.500895  538905 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:02:55.500905  538905 main.go:141] libmachine: (addons-396564) Calling .GetURL
	I1205 19:02:55.502418  538905 main.go:141] libmachine: (addons-396564) DBG | Using libvirt version 6000000
	I1205 19:02:55.504941  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.505303  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.505334  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.505479  538905 main.go:141] libmachine: Docker is up and running!
	I1205 19:02:55.505494  538905 main.go:141] libmachine: Reticulating splines...
	I1205 19:02:55.505503  538905 client.go:171] duration metric: took 24.458941374s to LocalClient.Create
	I1205 19:02:55.505536  538905 start.go:167] duration metric: took 24.459024763s to libmachine.API.Create "addons-396564"
	I1205 19:02:55.505551  538905 start.go:293] postStartSetup for "addons-396564" (driver="kvm2")
	I1205 19:02:55.505567  538905 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:02:55.505593  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:55.505888  538905 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:02:55.505917  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.508001  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.508342  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.508371  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.508538  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.508702  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.508853  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.508981  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:02:55.597167  538905 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:02:55.601604  538905 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:02:55.601634  538905 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:02:55.601725  538905 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:02:55.601759  538905 start.go:296] duration metric: took 96.19822ms for postStartSetup
	I1205 19:02:55.601809  538905 main.go:141] libmachine: (addons-396564) Calling .GetConfigRaw
	I1205 19:02:55.602468  538905 main.go:141] libmachine: (addons-396564) Calling .GetIP
	I1205 19:02:55.605049  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.605335  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.605366  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.605585  538905 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/config.json ...
	I1205 19:02:55.605819  538905 start.go:128] duration metric: took 24.578752053s to createHost
	I1205 19:02:55.605850  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.607839  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.608221  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.608251  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.608409  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.608602  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.608758  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.608918  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.609065  538905 main.go:141] libmachine: Using SSH client type: native
	I1205 19:02:55.609251  538905 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1205 19:02:55.609264  538905 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:02:55.717362  538905 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733425375.688966764
	
	I1205 19:02:55.717393  538905 fix.go:216] guest clock: 1733425375.688966764
	I1205 19:02:55.717401  538905 fix.go:229] Guest: 2024-12-05 19:02:55.688966764 +0000 UTC Remote: 2024-12-05 19:02:55.605834524 +0000 UTC m=+24.690421001 (delta=83.13224ms)
	I1205 19:02:55.717423  538905 fix.go:200] guest clock delta is within tolerance: 83.13224ms
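The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine only if the delta is within tolerance. A small sketch of that check, reusing the timestamp from the log and an assumed 2-second tolerance (minikube's exact threshold is not shown here):

```go
// Sketch of a guest-clock drift check: parse "seconds.nanoseconds" output
// and compare it against the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestOutput string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	// Value as it appears in the log; in practice this comes from running
	// `date +%s.%N` over SSH on the guest.
	delta, err := clockDelta("1733425375.688966764")
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance for illustration
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
```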
	I1205 19:02:55.717429  538905 start.go:83] releasing machines lock for "addons-396564", held for 24.690480333s
	I1205 19:02:55.717451  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:55.717719  538905 main.go:141] libmachine: (addons-396564) Calling .GetIP
	I1205 19:02:55.720452  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.720802  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.720835  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.720954  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:55.721533  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:55.721724  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:02:55.721838  538905 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:02:55.721909  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.721941  538905 ssh_runner.go:195] Run: cat /version.json
	I1205 19:02:55.721967  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:02:55.724709  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.724870  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.725081  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.725107  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.725288  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.725405  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:55.725436  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:55.725482  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.725670  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:02:55.725678  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.725861  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:02:55.725855  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:02:55.725978  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:02:55.726138  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:02:55.828445  538905 ssh_runner.go:195] Run: systemctl --version
	I1205 19:02:55.834558  538905 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:02:55.997650  538905 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:02:56.004916  538905 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:02:56.005041  538905 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:02:56.022101  538905 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:02:56.022136  538905 start.go:495] detecting cgroup driver to use...
	I1205 19:02:56.022227  538905 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:02:56.038382  538905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:02:56.053166  538905 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:02:56.053236  538905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:02:56.067658  538905 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:02:56.082380  538905 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:02:56.203743  538905 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:02:56.359486  538905 docker.go:233] disabling docker service ...
	I1205 19:02:56.359581  538905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:02:56.374940  538905 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:02:56.388245  538905 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:02:56.528365  538905 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:02:56.652910  538905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:02:56.668303  538905 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:02:56.687811  538905 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:02:56.687876  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.699758  538905 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:02:56.699828  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.710994  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.721827  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.732840  538905 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:02:56.744109  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.755349  538905 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.775027  538905 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:02:56.786975  538905 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:02:56.796769  538905 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:02:56.796862  538905 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:02:56.810530  538905 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
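The sequence above probes the bridge netfilter sysctl, loads br_netfilter when the sysctl file is missing, and enables IPv4 forwarding. A hypothetical, root-only sketch of the same flow (the paths are the real kernel interfaces; the flow is an illustration, not minikube's exact code):

```go
// Sketch: ensure bridge netfilter is available and IPv4 forwarding is on.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeSysctl); err != nil {
		// The sysctl only appears once the br_netfilter module is loaded.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v: %s\n", err, out)
			return
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
		return
	}
	fmt.Println("bridge netfilter and IPv4 forwarding are enabled")
}
```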
	I1205 19:02:56.820415  538905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:02:56.939260  538905 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:02:57.030837  538905 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:02:57.030939  538905 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:02:57.036149  538905 start.go:563] Will wait 60s for crictl version
	I1205 19:02:57.036240  538905 ssh_runner.go:195] Run: which crictl
	I1205 19:02:57.040118  538905 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:02:57.083305  538905 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:02:57.083428  538905 ssh_runner.go:195] Run: crio --version
	I1205 19:02:57.111637  538905 ssh_runner.go:195] Run: crio --version
	I1205 19:02:57.142930  538905 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:02:57.144340  538905 main.go:141] libmachine: (addons-396564) Calling .GetIP
	I1205 19:02:57.146939  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:57.147349  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:02:57.147438  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:02:57.147611  538905 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:02:57.152052  538905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
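The bash one-liner above rewrites /etc/hosts idempotently: drop any existing host.minikube.internal line, then append the current mapping. An equivalent sketch in Go (writing /etc/hosts requires root):

```go
// Sketch: remove any line ending in "\t<name>" from a hosts file and
// append the current "ip\tname" mapping, mirroring the grep -v / echo
// pipeline from the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```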
	I1205 19:02:57.165788  538905 kubeadm.go:883] updating cluster {Name:addons-396564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:addons-396564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:02:57.165921  538905 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:02:57.165990  538905 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:02:57.201069  538905 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 19:02:57.201161  538905 ssh_runner.go:195] Run: which lz4
	I1205 19:02:57.205635  538905 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 19:02:57.209913  538905 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 19:02:57.209957  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 19:02:58.602414  538905 crio.go:462] duration metric: took 1.396808897s to copy over tarball
	I1205 19:02:58.602508  538905 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 19:03:00.818046  538905 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.215473042s)
	I1205 19:03:00.818088  538905 crio.go:469] duration metric: took 2.215639844s to extract the tarball
	I1205 19:03:00.818099  538905 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 19:03:00.858572  538905 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:03:00.902879  538905 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:03:00.902911  538905 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:03:00.902925  538905 kubeadm.go:934] updating node { 192.168.39.9 8443 v1.31.2 crio true true} ...
	I1205 19:03:00.903084  538905 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-396564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-396564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:03:00.903176  538905 ssh_runner.go:195] Run: crio config
	I1205 19:03:00.951344  538905 cni.go:84] Creating CNI manager for ""
	I1205 19:03:00.951372  538905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:03:00.951384  538905 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:03:00.951406  538905 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.9 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-396564 NodeName:addons-396564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:03:00.951548  538905 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-396564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.9"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.9"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 19:03:00.951615  538905 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:03:00.963052  538905 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:03:00.963138  538905 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:03:00.972888  538905 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1205 19:03:00.989532  538905 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:03:01.006408  538905 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
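The 2287-byte kubeadm.yaml.new written here is the config rendered at kubeadm.go:195 above (it is later copied to /var/tmp/minikube/kubeadm.yaml). A config like this can be sanity-checked without touching cluster state; a minimal sketch using the same pinned binary and paths that appear in the log:

	# Dry-run the rendered kubeadm config; nothing is written to the node
	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --dry-run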
	I1205 19:03:01.024244  538905 ssh_runner.go:195] Run: grep 192.168.39.9	control-plane.minikube.internal$ /etc/hosts
	I1205 19:03:01.028317  538905 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:03:01.041736  538905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:03:01.174098  538905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:03:01.193014  538905 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564 for IP: 192.168.39.9
	I1205 19:03:01.193052  538905 certs.go:194] generating shared ca certs ...
	I1205 19:03:01.193080  538905 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.193289  538905 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:03:01.364949  538905 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt ...
	I1205 19:03:01.364986  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt: {Name:mkb0906d0eefc726a3bca7b5f1107c861696fa8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.365196  538905 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key ...
	I1205 19:03:01.365211  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key: {Name:mke5b97d4ab29c4390ef0b2f6566024d0db0ba91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.365318  538905 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:03:01.441933  538905 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt ...
	I1205 19:03:01.441972  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt: {Name:mk070fdb3f8a5db8d4547993257f562b7c79c1eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.442289  538905 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key ...
	I1205 19:03:01.442318  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key: {Name:mk8e32bf5e6761b3c50f4c9ba28815b32a22d987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.442450  538905 certs.go:256] generating profile certs ...
	I1205 19:03:01.442517  538905 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.key
	I1205 19:03:01.442531  538905 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt with IP's: []
	I1205 19:03:01.651902  538905 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt ...
	I1205 19:03:01.651935  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: {Name:mk74617608404eaed6e3664672f5e26e12276e2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.652140  538905 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.key ...
	I1205 19:03:01.652158  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.key: {Name:mk51ac90223272f0a3070964a273b469b652346b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.652259  538905 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key.41add270
	I1205 19:03:01.652301  538905 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt.41add270 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.9]
	I1205 19:03:01.803934  538905 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt.41add270 ...
	I1205 19:03:01.803973  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt.41add270: {Name:mk6f589e7c8dc32d5df66e511d67e9243b1d03b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.804160  538905 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key.41add270 ...
	I1205 19:03:01.804176  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key.41add270: {Name:mk9b1a71ff621c1f4832b4f504830ce477d5bf61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:01.804252  538905 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt.41add270 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt
	I1205 19:03:01.804409  538905 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key.41add270 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key
	I1205 19:03:01.804472  538905 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.key
	I1205 19:03:01.804493  538905 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.crt with IP's: []
	I1205 19:03:02.093089  538905 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.crt ...
	I1205 19:03:02.093129  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.crt: {Name:mk155e517c3bafdd635249d9a1d9c2ae1f557583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:02.093348  538905 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.key ...
	I1205 19:03:02.093366  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.key: {Name:mk1fca4c3033a8c71405b1d07ddd033cb4264799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:02.093604  538905 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:03:02.093651  538905 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:03:02.093688  538905 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:03:02.093719  538905 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:03:02.094398  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:03:02.122997  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:03:02.150079  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:03:02.177181  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:03:02.203403  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 19:03:02.229119  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:03:02.255473  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:03:02.281879  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 19:03:02.307258  538905 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:03:02.331505  538905 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:03:02.348854  538905 ssh_runner.go:195] Run: openssl version
	I1205 19:03:02.355122  538905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:03:02.366546  538905 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:02.371241  538905 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:02.371318  538905 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:03:02.377265  538905 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
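The b5213941.0 link name used above is the OpenSSL subject hash of the minikube CA; it can be recomputed directly, for example:

	# Prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem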
	I1205 19:03:02.388485  538905 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:03:02.392800  538905 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:03:02.392857  538905 kubeadm.go:392] StartCluster: {Name:addons-396564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-396564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:03:02.392937  538905 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:03:02.392981  538905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:03:02.434718  538905 cri.go:89] found id: ""
	I1205 19:03:02.434817  538905 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:03:02.445418  538905 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:03:02.455988  538905 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:03:02.466887  538905 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:03:02.466914  538905 kubeadm.go:157] found existing configuration files:
	
	I1205 19:03:02.466974  538905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 19:03:02.476641  538905 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 19:03:02.476712  538905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 19:03:02.487114  538905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 19:03:02.497174  538905 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 19:03:02.497265  538905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 19:03:02.507628  538905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 19:03:02.517718  538905 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 19:03:02.517777  538905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 19:03:02.528386  538905 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 19:03:02.538820  538905 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 19:03:02.538963  538905 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 19:03:02.549889  538905 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 19:03:02.748637  538905 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:03:12.860146  538905 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 19:03:12.860236  538905 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 19:03:12.860351  538905 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:03:12.860515  538905 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:03:12.860620  538905 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 19:03:12.860684  538905 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:03:12.862291  538905 out.go:235]   - Generating certificates and keys ...
	I1205 19:03:12.862388  538905 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 19:03:12.862462  538905 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 19:03:12.862563  538905 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:03:12.862642  538905 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:03:12.862712  538905 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:03:12.862757  538905 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 19:03:12.862807  538905 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 19:03:12.862912  538905 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-396564 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I1205 19:03:12.862963  538905 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 19:03:12.863075  538905 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-396564 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I1205 19:03:12.863178  538905 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:03:12.863291  538905 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:03:12.863357  538905 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 19:03:12.863440  538905 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:03:12.863526  538905 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:03:12.863609  538905 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 19:03:12.863691  538905 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:03:12.863787  538905 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:03:12.863869  538905 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:03:12.863980  538905 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:03:12.864072  538905 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:03:12.865604  538905 out.go:235]   - Booting up control plane ...
	I1205 19:03:12.865701  538905 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:03:12.865773  538905 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:03:12.865834  538905 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:03:12.865929  538905 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:03:12.866009  538905 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:03:12.866044  538905 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 19:03:12.866150  538905 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 19:03:12.866296  538905 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 19:03:12.866359  538905 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.396416ms
	I1205 19:03:12.866424  538905 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 19:03:12.866473  538905 kubeadm.go:310] [api-check] The API server is healthy after 5.001698189s
	I1205 19:03:12.866593  538905 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:03:12.866714  538905 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:03:12.866784  538905 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:03:12.867017  538905 kubeadm.go:310] [mark-control-plane] Marking the node addons-396564 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:03:12.867084  538905 kubeadm.go:310] [bootstrap-token] Using token: xx61i1.j99ndvasf8gy30az
	I1205 19:03:12.869309  538905 out.go:235]   - Configuring RBAC rules ...
	I1205 19:03:12.869421  538905 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:03:12.869519  538905 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:03:12.869729  538905 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:03:12.869892  538905 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:03:12.870045  538905 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:03:12.870117  538905 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:03:12.870231  538905 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:03:12.870277  538905 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 19:03:12.870323  538905 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 19:03:12.870329  538905 kubeadm.go:310] 
	I1205 19:03:12.870382  538905 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 19:03:12.870388  538905 kubeadm.go:310] 
	I1205 19:03:12.870482  538905 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 19:03:12.870491  538905 kubeadm.go:310] 
	I1205 19:03:12.870521  538905 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 19:03:12.870590  538905 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:03:12.870666  538905 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:03:12.870686  538905 kubeadm.go:310] 
	I1205 19:03:12.870762  538905 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 19:03:12.870768  538905 kubeadm.go:310] 
	I1205 19:03:12.870811  538905 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:03:12.870820  538905 kubeadm.go:310] 
	I1205 19:03:12.870863  538905 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 19:03:12.870943  538905 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:03:12.871007  538905 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:03:12.871013  538905 kubeadm.go:310] 
	I1205 19:03:12.871091  538905 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:03:12.871164  538905 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 19:03:12.871170  538905 kubeadm.go:310] 
	I1205 19:03:12.871242  538905 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xx61i1.j99ndvasf8gy30az \
	I1205 19:03:12.871336  538905 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 19:03:12.871356  538905 kubeadm.go:310] 	--control-plane 
	I1205 19:03:12.871360  538905 kubeadm.go:310] 
	I1205 19:03:12.871440  538905 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:03:12.871450  538905 kubeadm.go:310] 
	I1205 19:03:12.871523  538905 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xx61i1.j99ndvasf8gy30az \
	I1205 19:03:12.871629  538905 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
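After an init like the one above, control-plane health can be confirmed with the admin kubeconfig referenced in the kubeadm output; a minimal sketch using the same pinned binaries:

	# Check that the node registers and the static control-plane pods come up
	sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide
	sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system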
	I1205 19:03:12.871641  538905 cni.go:84] Creating CNI manager for ""
	I1205 19:03:12.871647  538905 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:03:12.873168  538905 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 19:03:12.874496  538905 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 19:03:12.888945  538905 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
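The 496-byte /etc/cni/net.d/1-k8s.conflist written here is the bridge CNI config chosen at cni.go:146; its exact contents are not shown in the log. For orientation only, a representative bridge conflist for the 10.244.0.0/16 pod CIDR looks roughly like the sketch below (illustrative, not the file minikube writes):

	cat <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF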
	I1205 19:03:12.913413  538905 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:03:12.913498  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:12.913498  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-396564 minikube.k8s.io/updated_at=2024_12_05T19_03_12_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=addons-396564 minikube.k8s.io/primary=true
	I1205 19:03:13.038908  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:13.080346  538905 ops.go:34] apiserver oom_adj: -16
	I1205 19:03:13.539104  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:14.039871  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:14.539935  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:15.039037  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:15.539087  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:16.039563  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:16.539270  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:17.039924  538905 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:03:17.137675  538905 kubeadm.go:1113] duration metric: took 4.224246396s to wait for elevateKubeSystemPrivileges
	I1205 19:03:17.137722  538905 kubeadm.go:394] duration metric: took 14.744870852s to StartCluster
	I1205 19:03:17.137748  538905 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:17.137923  538905 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:03:17.138342  538905 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:03:17.138591  538905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:03:17.138609  538905 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:03:17.138682  538905 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
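The effective addon set in the toEnable map above can also be inspected from the host once the profile is up, for example:

	# Shows which addons are enabled for this profile (names match the toEnable map above)
	minikube addons list -p addons-396564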
	I1205 19:03:17.138828  538905 addons.go:69] Setting yakd=true in profile "addons-396564"
	I1205 19:03:17.138842  538905 addons.go:69] Setting inspektor-gadget=true in profile "addons-396564"
	I1205 19:03:17.138863  538905 addons.go:69] Setting volumesnapshots=true in profile "addons-396564"
	I1205 19:03:17.138870  538905 addons.go:234] Setting addon inspektor-gadget=true in "addons-396564"
	I1205 19:03:17.138878  538905 addons.go:234] Setting addon volumesnapshots=true in "addons-396564"
	I1205 19:03:17.138878  538905 addons.go:69] Setting metrics-server=true in profile "addons-396564"
	I1205 19:03:17.138879  538905 addons.go:69] Setting volcano=true in profile "addons-396564"
	I1205 19:03:17.138900  538905 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-396564"
	I1205 19:03:17.138909  538905 addons.go:234] Setting addon volcano=true in "addons-396564"
	I1205 19:03:17.138905  538905 config.go:182] Loaded profile config "addons-396564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:03:17.138918  538905 addons.go:69] Setting registry=true in profile "addons-396564"
	I1205 19:03:17.138919  538905 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-396564"
	I1205 19:03:17.138923  538905 addons.go:69] Setting gcp-auth=true in profile "addons-396564"
	I1205 19:03:17.138929  538905 addons.go:234] Setting addon registry=true in "addons-396564"
	I1205 19:03:17.138931  538905 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-396564"
	I1205 19:03:17.138937  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.138941  538905 mustload.go:65] Loading cluster: addons-396564
	I1205 19:03:17.138949  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.138860  538905 addons.go:69] Setting storage-provisioner=true in profile "addons-396564"
	I1205 19:03:17.138955  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.138965  538905 addons.go:69] Setting ingress=true in profile "addons-396564"
	I1205 19:03:17.138968  538905 addons.go:234] Setting addon storage-provisioner=true in "addons-396564"
	I1205 19:03:17.138977  538905 addons.go:234] Setting addon ingress=true in "addons-396564"
	I1205 19:03:17.138991  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139015  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.138855  538905 addons.go:234] Setting addon yakd=true in "addons-396564"
	I1205 19:03:17.139115  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139122  538905 config.go:182] Loaded profile config "addons-396564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:03:17.138910  538905 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-396564"
	I1205 19:03:17.139280  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139402  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.138892  538905 addons.go:234] Setting addon metrics-server=true in "addons-396564"
	I1205 19:03:17.139438  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139438  538905 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-396564"
	I1205 19:03:17.139440  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139449  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139472  538905 addons.go:69] Setting default-storageclass=true in profile "addons-396564"
	I1205 19:03:17.139477  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139490  538905 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-396564"
	I1205 19:03:17.139491  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139496  538905 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-396564"
	I1205 19:03:17.138910  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139417  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139516  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139526  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139536  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139546  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139568  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139580  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139612  538905 addons.go:69] Setting cloud-spanner=true in profile "addons-396564"
	I1205 19:03:17.139623  538905 addons.go:234] Setting addon cloud-spanner=true in "addons-396564"
	I1205 19:03:17.138912  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139631  538905 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-396564"
	I1205 19:03:17.139646  538905 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-396564"
	I1205 19:03:17.138944  538905 addons.go:69] Setting ingress-dns=true in profile "addons-396564"
	I1205 19:03:17.139734  538905 addons.go:234] Setting addon ingress-dns=true in "addons-396564"
	I1205 19:03:17.139797  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139834  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139916  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139946  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139964  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139969  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.139999  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.140005  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.140007  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139921  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.140152  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139933  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.139925  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.140441  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.140635  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.140668  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.139918  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.141601  538905 out.go:177] * Verifying Kubernetes components...
	I1205 19:03:17.143429  538905 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:03:17.152666  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.152731  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.153121  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.153163  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.158728  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.158794  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.162903  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I1205 19:03:17.167176  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45151
	I1205 19:03:17.167941  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.168683  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.168709  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.168795  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.169319  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.169950  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.170000  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.170747  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.170778  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.171266  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.171939  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.171983  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.174673  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36641
	I1205 19:03:17.175336  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.175996  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.176026  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.176464  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.177275  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I1205 19:03:17.177654  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.177880  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46871
	I1205 19:03:17.178165  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.178180  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.178609  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.179261  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.179320  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.179936  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.180864  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.180882  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.180932  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.181006  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.181641  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.182313  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.182357  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.190692  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I1205 19:03:17.200001  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.200850  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.200877  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.201419  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.201758  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I1205 19:03:17.202109  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.202171  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.202191  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I1205 19:03:17.202745  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.202884  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.203329  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.203351  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.203530  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.203546  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.204027  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.204637  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.204689  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.205513  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42447
	I1205 19:03:17.206109  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.206758  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.206776  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.207195  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.207794  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.207836  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.208032  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36735
	I1205 19:03:17.208519  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.208805  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36107
	I1205 19:03:17.209058  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.209073  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.209452  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.209924  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.209944  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.210325  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.210699  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.210758  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.214611  538905 addons.go:234] Setting addon default-storageclass=true in "addons-396564"
	I1205 19:03:17.214660  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.215034  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.215076  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.215360  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.215812  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.215851  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.216471  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.216504  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.218736  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I1205 19:03:17.219325  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.219964  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.219983  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.220414  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.220596  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.223748  538905 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-396564"
	I1205 19:03:17.223799  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.224163  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.224207  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.226251  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33919
	I1205 19:03:17.228216  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.228856  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.228886  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.229072  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I1205 19:03:17.229308  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.229456  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.229651  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.230237  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.230256  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.230558  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37935
	I1205 19:03:17.230755  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.231379  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.231425  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.231669  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.232250  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.232280  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.232690  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.232866  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.234739  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.236688  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33949
	I1205 19:03:17.237159  538905 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1205 19:03:17.237565  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.238368  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
	I1205 19:03:17.238605  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.238629  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.238745  538905 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 19:03:17.238762  538905 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 19:03:17.238794  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.240555  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I1205 19:03:17.240754  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:17.241171  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.241219  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.241747  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.242370  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.242390  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.242458  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I1205 19:03:17.242818  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.242900  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I1205 19:03:17.243226  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.243264  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.244048  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.244081  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.244398  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.244614  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.244807  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.244883  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I1205 19:03:17.245298  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.245422  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34543
	I1205 19:03:17.245866  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.246637  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.246656  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.247197  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.247697  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.248391  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.248629  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.250408  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I1205 19:03:17.250581  538905 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1205 19:03:17.250833  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.252639  538905 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 19:03:17.252965  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.253030  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.253384  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.253973  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.254000  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.254070  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43293
	I1205 19:03:17.254185  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.254357  538905 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 19:03:17.254380  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 19:03:17.254400  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.254472  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.254594  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.254731  538905 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:03:17.256978  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1205 19:03:17.257067  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.257084  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.257179  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.257220  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.257236  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.257264  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.257307  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.257311  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I1205 19:03:17.257427  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.257438  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.257482  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43135
	I1205 19:03:17.258243  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.258337  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.258345  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.258358  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.258365  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.258414  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.258442  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.258606  538905 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:03:17.258622  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:03:17.258641  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.258652  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.258713  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.259026  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.259051  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.259028  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.259106  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.259187  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.259199  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.259475  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.259573  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.260156  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.260201  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.260295  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.260728  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.260848  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.260956  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.261268  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.261579  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.261596  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.262196  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.262217  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.262234  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.262549  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.263049  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.263081  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.263098  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.263129  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.263244  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.263293  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.263579  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.263640  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.263973  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.264019  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.264045  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:17.264337  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:17.264611  538905 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:03:17.264623  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.264650  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:17.264615  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:17.264671  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:17.264684  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:17.264691  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:17.265710  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.265958  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.266069  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 19:03:17.266211  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:17.266256  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.266276  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:17.266476  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	W1205 19:03:17.266559  538905 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 19:03:17.266990  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.267858  538905 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:03:17.268598  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.268729  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.268762  538905 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1205 19:03:17.268930  538905 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 19:03:17.269000  538905 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 19:03:17.269058  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 19:03:17.269747  538905 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 19:03:17.269776  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.269180  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.269424  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.269833  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.270201  538905 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:03:17.270219  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 19:03:17.270237  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.270336  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.270531  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.270703  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.271214  538905 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 19:03:17.271233  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 19:03:17.271250  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.271564  538905 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 19:03:17.271728  538905 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 19:03:17.271776  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.272088  538905 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1205 19:03:17.274717  538905 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:03:17.274745  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 19:03:17.274766  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.276130  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.276178  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36215
	I1205 19:03:17.276710  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.277058  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.277089  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.277302  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.277536  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.277696  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.277883  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.278013  538905 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1205 19:03:17.278397  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.278419  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.278799  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.278990  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.279551  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.279727  538905 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 19:03:17.279751  538905 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1205 19:03:17.279782  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.282341  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.284441  538905 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1205 19:03:17.286245  538905 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1205 19:03:17.286274  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 19:03:17.286304  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.287847  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.288505  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.288719  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39173
	I1205 19:03:17.288976  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.288999  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.289063  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33959
	I1205 19:03:17.289175  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.289236  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I1205 19:03:17.289830  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.289857  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.289939  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.289954  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.289957  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.290026  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.290505  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.290549  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.290569  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.290598  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.290598  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.290503  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.290641  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.290716  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.291374  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.291432  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.291490  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.291498  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.291505  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.291513  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.291538  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.291556  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.291609  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.291624  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.291654  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.291672  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.291805  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.291961  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.291998  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:17.292034  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:17.292227  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.292238  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.292331  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.292335  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.292262  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.292508  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.292565  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.292601  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.292638  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.292743  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.292889  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.292943  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.293760  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.293758  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.294020  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.294433  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.294873  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.294892  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.295094  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.295342  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.295392  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.295436  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.295955  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.296010  538905 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:03:17.296026  538905 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:03:17.296053  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.296131  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.296689  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43991
	I1205 19:03:17.297173  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.297713  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.297732  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.297756  538905 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1205 19:03:17.298293  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.298463  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.299415  538905 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:03:17.299434  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 19:03:17.299452  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.299483  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.300707  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.300732  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.300995  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.301197  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.301398  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.301532  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.303183  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.303639  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.303661  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.303909  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.304115  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.304253  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.304394  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.309978  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
	I1205 19:03:17.310453  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.311077  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.311103  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.311447  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.311706  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.313508  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.315532  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 19:03:17.317242  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 19:03:17.318690  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 19:03:17.320299  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 19:03:17.320540  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
	I1205 19:03:17.320966  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:17.321573  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:17.321597  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:17.321996  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:17.322244  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:17.323204  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 19:03:17.324251  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:17.326052  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 19:03:17.326052  538905 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 19:03:17.327988  538905 out.go:177]   - Using image docker.io/busybox:stable
	I1205 19:03:17.328022  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 19:03:17.329516  538905 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:03:17.329542  538905 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 19:03:17.329542  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 19:03:17.329574  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.330987  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 19:03:17.331050  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 19:03:17.331081  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:17.333003  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.333437  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.333509  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.333679  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.333863  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.334040  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.334198  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.335147  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.335551  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:17.335572  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:17.335852  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:17.336035  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:17.336191  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:17.336373  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:17.724756  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:03:17.779152  538905 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 19:03:17.779189  538905 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 19:03:17.792340  538905 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 19:03:17.792377  538905 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 19:03:17.830257  538905 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:03:17.830281  538905 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:03:17.837448  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:03:17.850371  538905 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 19:03:17.850409  538905 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 19:03:17.858190  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:03:17.875074  538905 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 19:03:17.875110  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1205 19:03:17.887596  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 19:03:17.913718  538905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 19:03:17.913744  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 19:03:17.930259  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 19:03:17.939407  538905 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 19:03:17.939437  538905 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 19:03:17.941437  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 19:03:17.941462  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 19:03:17.944453  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:03:17.972559  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:03:17.974862  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:03:18.051969  538905 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 19:03:18.052011  538905 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 19:03:18.083841  538905 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:03:18.083878  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 19:03:18.198115  538905 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 19:03:18.198153  538905 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 19:03:18.211074  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 19:03:18.211113  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 19:03:18.264034  538905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 19:03:18.264074  538905 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 19:03:18.274961  538905 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 19:03:18.275008  538905 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 19:03:18.279065  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 19:03:18.332125  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:03:18.339471  538905 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 19:03:18.339500  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 19:03:18.523917  538905 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:03:18.523946  538905 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 19:03:18.537800  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 19:03:18.537840  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 19:03:18.537977  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 19:03:18.538010  538905 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 19:03:18.662179  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 19:03:18.751655  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:03:18.857084  538905 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:03:18.857120  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 19:03:19.003627  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 19:03:19.003663  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 19:03:19.206662  538905 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 19:03:19.206701  538905 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 19:03:19.340888  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:03:19.607772  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 19:03:19.607799  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 19:03:19.860002  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 19:03:19.860041  538905 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 19:03:20.146466  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 19:03:20.146501  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 19:03:20.537380  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 19:03:20.537406  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 19:03:20.864745  538905 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:03:20.864778  538905 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 19:03:21.124829  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
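The overlapping Run/Completed timestamps above show that the addon manifests are applied concurrently rather than one after another. A minimal Go sketch of that dispatch pattern, assuming golang.org/x/sync/errgroup and a hypothetical kubectlApply helper (an illustration only, not minikube's actual addons code):

	package main

	import (
		"context"
		"log"
		"os/exec"

		"golang.org/x/sync/errgroup"
	)

	// kubectlApply shells out the same way the log lines above do: sudo with an
	// inline KUBECONFIG assignment, then kubectl apply -f <manifest>.
	func kubectlApply(ctx context.Context, manifest string) error {
		cmd := exec.CommandContext(ctx, "sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply", "-f", manifest)
		return cmd.Run()
	}

	// applyAll dispatches every manifest concurrently and waits for all of them,
	// mirroring the overlapping apply commands in the log.
	func applyAll(ctx context.Context, manifests []string) error {
		g, ctx := errgroup.WithContext(ctx)
		for _, m := range manifests {
			m := m // capture for the goroutine (pre-Go 1.22 loop semantics)
			g.Go(func() error { return kubectlApply(ctx, m) })
		}
		return g.Wait()
	}

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/ingress-deploy.yaml",
			"/etc/kubernetes/addons/storage-provisioner.yaml",
		}
		if err := applyAll(context.Background(), manifests); err != nil {
			log.Fatal(err)
		}
	}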
	I1205 19:03:21.301882  538905 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.471560987s)
	I1205 19:03:21.301923  538905 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1205 19:03:21.301925  538905 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.471626852s)
	I1205 19:03:21.301993  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.577184374s)
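For readability, the stanza that the sed pipeline above splices into the CoreDNS ConfigMap (just before the "forward . /etc/resolv.conf" line) can be reproduced with a small Go helper; buildHostsBlock is a hypothetical name used only for illustration:

	package main

	import "fmt"

	// buildHostsBlock renders the hosts stanza the sed pipeline inserts into the
	// coredns ConfigMap, mapping host.minikube.internal to the host gateway IP.
	func buildHostsBlock(hostIP string) string {
		return fmt.Sprintf("        hosts {\n"+
			"           %s host.minikube.internal\n"+
			"           fallthrough\n"+
			"        }\n", hostIP)
	}

	func main() {
		// 192.168.39.1 is the host-side gateway injected in this run.
		fmt.Print(buildHostsBlock("192.168.39.1"))
	}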
	I1205 19:03:21.302045  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:21.302061  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:21.302408  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:21.302424  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:21.302434  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:21.302441  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:21.302991  538905 node_ready.go:35] waiting up to 6m0s for node "addons-396564" to be "Ready" ...
	I1205 19:03:21.303143  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:21.303168  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:21.321487  538905 node_ready.go:49] node "addons-396564" has status "Ready":"True"
	I1205 19:03:21.321516  538905 node_ready.go:38] duration metric: took 18.493638ms for node "addons-396564" to be "Ready" ...
	I1205 19:03:21.321525  538905 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:03:21.346139  538905 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:21.864801  538905 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-396564" context rescaled to 1 replicas
	I1205 19:03:23.391110  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
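The node_ready/pod_ready lines above poll the API server until the node and the system-critical pods report Ready. A minimal client-go sketch of that kind of readiness poll, assuming a standard clientset (hypothetical helper, not minikube's pod_ready.go):

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a pod until its PodReady condition is True or the
	// timeout expires, similar to the 6m0s waits recorded in the log.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // re-check periodically, as the log does
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}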
	I1205 19:03:24.307070  538905 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 19:03:24.307112  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:24.310606  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:24.311058  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:24.311090  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:24.311317  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:24.311526  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:24.311697  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:24.311882  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:24.782838  538905 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 19:03:24.978764  538905 addons.go:234] Setting addon gcp-auth=true in "addons-396564"
	I1205 19:03:24.978836  538905 host.go:66] Checking if "addons-396564" exists ...
	I1205 19:03:24.979296  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:24.979338  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:24.995795  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43057
	I1205 19:03:24.996256  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:24.996764  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:24.996787  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:24.997194  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:24.997684  538905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:03:24.997715  538905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:03:25.013546  538905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I1205 19:03:25.014063  538905 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:03:25.014568  538905 main.go:141] libmachine: Using API Version  1
	I1205 19:03:25.014593  538905 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:03:25.014904  538905 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:03:25.015108  538905 main.go:141] libmachine: (addons-396564) Calling .GetState
	I1205 19:03:25.016691  538905 main.go:141] libmachine: (addons-396564) Calling .DriverName
	I1205 19:03:25.016960  538905 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 19:03:25.016991  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHHostname
	I1205 19:03:25.019536  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:25.019989  538905 main.go:141] libmachine: (addons-396564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:dd:b4", ip: ""} in network mk-addons-396564: {Iface:virbr1 ExpiryTime:2024-12-05 20:02:47 +0000 UTC Type:0 Mac:52:54:00:86:dd:b4 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-396564 Clientid:01:52:54:00:86:dd:b4}
	I1205 19:03:25.020018  538905 main.go:141] libmachine: (addons-396564) DBG | domain addons-396564 has defined IP address 192.168.39.9 and MAC address 52:54:00:86:dd:b4 in network mk-addons-396564
	I1205 19:03:25.020212  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHPort
	I1205 19:03:25.020465  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHKeyPath
	I1205 19:03:25.020663  538905 main.go:141] libmachine: (addons-396564) Calling .GetSSHUsername
	I1205 19:03:25.020822  538905 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/addons-396564/id_rsa Username:docker}
	I1205 19:03:25.414722  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:27.148875  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.31137642s)
	I1205 19:03:27.148894  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.290667168s)
	I1205 19:03:27.148938  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.148952  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.148965  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.261328317s)
	I1205 19:03:27.148973  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149047  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149064  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (9.218759745s)
	I1205 19:03:27.149006  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149084  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149086  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149098  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149139  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.204658596s)
	I1205 19:03:27.149162  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149172  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149212  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.176621075s)
	I1205 19:03:27.149229  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149237  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149257  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.174370206s)
	I1205 19:03:27.149272  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149280  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149332  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.870241709s)
	I1205 19:03:27.149349  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149356  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149363  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.817208999s)
	I1205 19:03:27.149379  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149387  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149421  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.487212787s)
	I1205 19:03:27.149441  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149451  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149482  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.149489  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.397794878s)
	I1205 19:03:27.149507  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.149508  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.149518  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.149520  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149528  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149532  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149535  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149550  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.149561  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.149572  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149579  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.149658  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.80872843s)
	I1205 19:03:27.149660  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.149674  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.149682  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149691  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	W1205 19:03:27.149690  538905 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:03:27.149739  538905 retry.go:31] will retry after 170.150372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
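
The failure above is a CRD-ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has no mapping for that kind yet and kubectl reports "ensure CRDs are installed first". The addon code retries 170ms later (the retry at 19:03:27.320 below switches to kubectl apply --force). A minimal sketch of the more deterministic sequencing, applying the CRDs on their own and waiting for them to become Established before applying the dependent object; the file paths come from the log, while the retry budget and helper are illustrative, not minikube's actual code:

// Sketch only: apply the CRD manifests first, wait for the Established
// condition, then apply the custom resources that depend on them.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	crds := "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"
	cr := "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"

	// 1. Create the CRDs first.
	if err := run("apply", "-f", crds); err != nil {
		panic(err)
	}
	// 2. Block until the CRD is actually served by the API server.
	if err := run("wait", "--for=condition=Established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
		panic(err)
	}
	// 3. Now the VolumeSnapshotClass can be applied; retry briefly in case
	//    discovery caches are still catching up.
	var err error
	for i := 0; i < 5; i++ {
		if err = run("apply", "-f", cr); err == nil {
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	panic(err)
}
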
	I1205 19:03:27.149800  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.149841  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.149848  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.149855  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.149862  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.150214  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.150231  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.150251  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.150258  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.150314  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.150323  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.150331  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.150338  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.150660  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.150699  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.150864  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.150876  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.150884  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.151384  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.151411  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.151418  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.151428  538905 addons.go:475] Verifying addon ingress=true in "addons-396564"
	I1205 19:03:27.152811  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.152846  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.152853  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.152861  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.152868  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.152917  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.152935  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.152941  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.152948  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.152954  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.153401  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.153430  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.153437  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.153447  538905 addons.go:475] Verifying addon registry=true in "addons-396564"
	I1205 19:03:27.154033  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.154065  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.154071  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.154078  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.154084  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.155127  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155167  538905 out.go:177] * Verifying ingress addon...
	I1205 19:03:27.155350  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155364  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155501  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155512  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155522  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155552  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155559  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155617  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155639  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155645  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155686  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155724  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155730  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155740  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.155746  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.155773  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155795  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155796  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155805  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155811  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.155834  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155842  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155883  538905 out.go:177] * Verifying registry addon...
	I1205 19:03:27.155955  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.155965  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.155995  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.156042  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.156049  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.156320  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:27.156384  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.156392  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.156400  538905 addons.go:475] Verifying addon metrics-server=true in "addons-396564"
	I1205 19:03:27.159282  538905 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-396564 service yakd-dashboard -n yakd-dashboard
	
	I1205 19:03:27.160258  538905 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 19:03:27.160285  538905 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 19:03:27.213140  538905 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:03:27.213171  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:27.215940  538905 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 19:03:27.215968  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
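
The kapi.go:96 lines that fill the rest of this log are a poll loop: list the pods matching a label selector in the target namespace and keep waiting until all of them are up. A minimal sketch of that loop with client-go, assuming the selector and namespace shown in the log; the polling interval, timeout, and the simplified Running-only check are illustrative (the real helper also waits for readiness):

// Sketch of the label-selector wait seen in the kapi.go lines, using client-go.
// Interval and timeout are illustrative values, not minikube's.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(cs *kubernetes.Clientset, ns, selector string) error {
	return wait.PollImmediate(500*time.Millisecond, 8*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // keep polling on transient errors or before pods exist
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
		panic(err)
	}
}
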
	I1205 19:03:27.234522  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.234557  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.234935  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.234955  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	W1205 19:03:27.235070  538905 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
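
The default-storageclass warning above is an optimistic-concurrency conflict: another writer updated the local-path StorageClass between the addon's read and its update, so the stale resourceVersion was rejected. A hedged sketch of the usual client-go remedy, re-reading the object and re-applying the is-default-class annotation under RetryOnConflict; the kubeconfig path and function name are assumptions, not minikube's implementation:

// Sketch, not minikube's code: retry the annotation update on conflict so a
// stale resourceVersion ("the object has been modified") does not surface
// as an addon error.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func markNonDefault(cs *kubernetes.Clientset, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Always fetch the latest object before mutating it.
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	// Assumed path; the tests run kubectl against /var/lib/minikube/kubeconfig inside the VM.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := markNonDefault(cs, "local-path"); err != nil {
		panic(err)
	}
}
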
	I1205 19:03:27.261016  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:27.261047  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:27.261341  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:27.261363  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:27.320084  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:03:27.675760  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:27.676774  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:27.857884  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:28.172505  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:28.172512  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:28.213595  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.088690643s)
	I1205 19:03:28.213635  538905 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.196647686s)
	I1205 19:03:28.213666  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:28.213685  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:28.213980  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:28.213984  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:28.214001  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:28.214036  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:28.214045  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:28.214473  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:28.214528  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:28.214543  538905 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-396564"
	I1205 19:03:28.215861  538905 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 19:03:28.216866  538905 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 19:03:28.218797  538905 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 19:03:28.219941  538905 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 19:03:28.220343  538905 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 19:03:28.220372  538905 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 19:03:28.247520  538905 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:03:28.247545  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:28.312081  538905 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 19:03:28.312116  538905 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 19:03:28.397439  538905 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:03:28.397468  538905 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 19:03:28.490562  538905 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:03:28.664861  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:28.665427  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:28.724707  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:29.009861  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.689710769s)
	I1205 19:03:29.009943  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:29.009959  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:29.010391  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:29.010415  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:29.010425  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:29.010451  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:29.010510  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:29.010926  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:29.010948  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:29.010952  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:29.166360  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:29.167080  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:29.224362  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:29.671192  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:29.675772  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:29.757603  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:29.888009  538905 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.397401133s)
	I1205 19:03:29.888074  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:29.888086  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:29.888411  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:29.888494  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:29.888517  538905 main.go:141] libmachine: Making call to close driver server
	I1205 19:03:29.888519  538905 main.go:141] libmachine: (addons-396564) DBG | Closing plugin on server side
	I1205 19:03:29.888528  538905 main.go:141] libmachine: (addons-396564) Calling .Close
	I1205 19:03:29.888801  538905 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:03:29.888846  538905 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:03:29.890456  538905 addons.go:475] Verifying addon gcp-auth=true in "addons-396564"
	I1205 19:03:29.892199  538905 out.go:177] * Verifying gcp-auth addon...
	I1205 19:03:29.894491  538905 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 19:03:29.901008  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:29.922960  538905 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 19:03:29.922994  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:30.168826  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:30.169312  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:30.272681  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:30.404420  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:30.668057  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:30.668830  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:30.768177  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:30.901474  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:31.165318  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:31.165506  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:31.225852  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:31.398371  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:31.665113  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:31.666922  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:31.724941  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:31.899258  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:32.165813  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:32.165958  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:32.225196  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:32.352784  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:32.397888  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:32.678119  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:32.678669  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:32.724407  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:32.898533  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:33.164459  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:33.164888  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:33.226391  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:33.398836  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:33.665504  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:33.665695  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:33.727046  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:33.898412  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:34.166250  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:34.166280  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:34.225529  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:34.398691  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:34.665189  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:34.665420  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:34.725312  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:34.858878  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:34.897867  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:35.165892  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:35.166074  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:35.225338  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:35.399215  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:35.667622  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:35.667861  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:35.724931  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:35.898337  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:36.167323  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:36.167894  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:36.226530  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:36.398930  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:36.664756  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:36.665830  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:36.725337  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:36.902761  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:37.166500  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:37.166672  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:37.225350  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:37.361251  538905 pod_ready.go:103] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:37.399348  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:37.666471  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:37.667929  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:37.727619  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:37.898659  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:38.165413  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:38.165773  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:38.225002  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:38.398811  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:38.665868  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:38.666379  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:38.724937  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:38.897904  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:39.169969  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:39.171046  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:39.267889  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:39.353755  538905 pod_ready.go:93] pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.353780  538905 pod_ready.go:82] duration metric: took 18.007611624s for pod "amd-gpu-device-plugin-xcvzc" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.353791  538905 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hls42" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.356712  538905 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-hls42" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hls42" not found
	I1205 19:03:39.356734  538905 pod_ready.go:82] duration metric: took 2.937552ms for pod "coredns-7c65d6cfc9-hls42" in "kube-system" namespace to be "Ready" ...
	E1205 19:03:39.356745  538905 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-hls42" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hls42" not found
	I1205 19:03:39.356752  538905 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jz7lb" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.362331  538905 pod_ready.go:93] pod "coredns-7c65d6cfc9-jz7lb" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.362354  538905 pod_ready.go:82] duration metric: took 5.590777ms for pod "coredns-7c65d6cfc9-jz7lb" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.362364  538905 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.366439  538905 pod_ready.go:93] pod "etcd-addons-396564" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.366457  538905 pod_ready.go:82] duration metric: took 4.085046ms for pod "etcd-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.366465  538905 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.370515  538905 pod_ready.go:93] pod "kube-apiserver-addons-396564" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.370533  538905 pod_ready.go:82] duration metric: took 4.059957ms for pod "kube-apiserver-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.370541  538905 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.398216  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:39.551873  538905 pod_ready.go:93] pod "kube-controller-manager-addons-396564" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.551902  538905 pod_ready.go:82] duration metric: took 181.352174ms for pod "kube-controller-manager-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.551917  538905 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r9sk8" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.665165  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:39.665337  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:39.725395  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:39.898316  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:39.950847  538905 pod_ready.go:93] pod "kube-proxy-r9sk8" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:39.950873  538905 pod_ready.go:82] duration metric: took 398.949152ms for pod "kube-proxy-r9sk8" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:39.950883  538905 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:40.164597  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:40.167587  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:40.225583  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:40.350845  538905 pod_ready.go:93] pod "kube-scheduler-addons-396564" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:40.350872  538905 pod_ready.go:82] duration metric: took 399.983082ms for pod "kube-scheduler-addons-396564" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:40.350883  538905 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:40.398572  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:40.666092  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:40.666584  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:40.725789  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:40.898728  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:41.165494  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:41.165930  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:41.224921  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:41.399355  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:41.666376  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:41.666828  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:41.724370  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:41.899184  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:42.168571  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:42.168801  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:42.226022  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:42.357785  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:42.397808  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:42.665153  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:42.666686  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:42.726451  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:42.898609  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:43.165012  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:43.165499  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:43.225987  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:43.398093  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:43.666133  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:43.666380  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:43.725867  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:43.898987  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:44.165607  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:44.167003  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:44.225504  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:44.399377  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:44.666241  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:44.666824  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:44.725679  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:44.857401  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:44.899114  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:45.165502  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:45.167101  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:45.225505  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:45.397873  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:45.665510  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:45.665816  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:45.724116  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:45.899057  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:46.165471  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:46.166833  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:46.225349  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:46.398338  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:46.930302  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:46.930796  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:46.930885  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:46.931867  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:46.936088  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:47.167978  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:47.168556  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:47.225358  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:47.398876  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:47.665371  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:47.665846  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:47.725691  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:47.898456  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:48.165222  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:48.166569  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:48.225020  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:48.399296  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:48.666193  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:48.668447  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:48.724921  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:48.898465  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:49.449560  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:49.449783  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:49.449805  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:49.453051  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:49.454348  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:49.663697  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:49.665037  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:49.724793  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:49.897904  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:50.164920  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:50.165343  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:50.225344  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:50.398459  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:50.695264  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:50.695483  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:50.997259  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:50.997805  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:51.165133  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:51.165167  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:51.224793  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:51.403994  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:51.666148  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:51.666284  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:51.725010  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:51.857649  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:51.898914  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:52.165548  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:52.165956  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:52.224531  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:52.398780  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:52.665463  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:52.665587  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:52.724214  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:52.898550  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:53.165811  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:53.165871  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:53.225496  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:53.397843  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:53.665453  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:53.665844  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:53.725052  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:53.898694  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:54.164902  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:54.165050  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:54.226000  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:54.356971  538905 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"False"
	I1205 19:03:54.398410  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:54.664868  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:54.665252  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:54.724836  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:54.856716  538905 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace has status "Ready":"True"
	I1205 19:03:54.856746  538905 pod_ready.go:82] duration metric: took 14.505855224s for pod "nvidia-device-plugin-daemonset-pngv4" in "kube-system" namespace to be "Ready" ...
	I1205 19:03:54.856758  538905 pod_ready.go:39] duration metric: took 33.53522113s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:03:54.856779  538905 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:03:54.856830  538905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:03:54.874892  538905 api_server.go:72] duration metric: took 37.736246416s to wait for apiserver process to appear ...
	I1205 19:03:54.874930  538905 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:03:54.874954  538905 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1205 19:03:54.879526  538905 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
	ok
	I1205 19:03:54.880515  538905 api_server.go:141] control plane version: v1.31.2
	I1205 19:03:54.880544  538905 api_server.go:131] duration metric: took 5.605685ms to wait for apiserver health ...
	I1205 19:03:54.880556  538905 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:03:54.889469  538905 system_pods.go:59] 18 kube-system pods found
	I1205 19:03:54.889501  538905 system_pods.go:61] "amd-gpu-device-plugin-xcvzc" [89313f55-0769-4cd7-af1d-e97c6833dcef] Running
	I1205 19:03:54.889507  538905 system_pods.go:61] "coredns-7c65d6cfc9-jz7lb" [56b461df-6acc-4973-9067-3d64d678111c] Running
	I1205 19:03:54.889514  538905 system_pods.go:61] "csi-hostpath-attacher-0" [8e9fadd5-acf2-477a-9d62-c47987d16129] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 19:03:54.889520  538905 system_pods.go:61] "csi-hostpath-resizer-0" [432c94bf-2efd-467c-95cb-1aa632b845cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 19:03:54.889527  538905 system_pods.go:61] "csi-hostpathplugin-64t5f" [5be510d8-669e-43b3-9429-cfb59274f96d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:03:54.889531  538905 system_pods.go:61] "etcd-addons-396564" [6f990ebe-6f3e-4cd3-81d8-ba9f8b3013a3] Running
	I1205 19:03:54.889535  538905 system_pods.go:61] "kube-apiserver-addons-396564" [119c3cdb-12a4-45b6-a46a-b42bcc85bd84] Running
	I1205 19:03:54.889538  538905 system_pods.go:61] "kube-controller-manager-addons-396564" [83c3fd83-132d-4811-a930-3e91899ce37e] Running
	I1205 19:03:54.889545  538905 system_pods.go:61] "kube-ingress-dns-minikube" [364ca423-ae05-4a12-a6fc-11a86e3213ba] Running
	I1205 19:03:54.889549  538905 system_pods.go:61] "kube-proxy-r9sk8" [f3d31a62-b4c2-4d67-801b-a8623f03af65] Running
	I1205 19:03:54.889555  538905 system_pods.go:61] "kube-scheduler-addons-396564" [58a5b5ae-c488-445a-ae98-8396aae2efce] Running
	I1205 19:03:54.889560  538905 system_pods.go:61] "metrics-server-84c5f94fbc-p7wrj" [3aec8457-6ee0-4eeb-9abe-871b30996d06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 19:03:54.889566  538905 system_pods.go:61] "nvidia-device-plugin-daemonset-pngv4" [53fc8bbc-5529-4aaf-81c2-c11c9b882577] Running
	I1205 19:03:54.889571  538905 system_pods.go:61] "registry-66c9cd494c-ljr8x" [0b9f7adc-96cd-4c61-aab5-70400f03a848] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 19:03:54.889578  538905 system_pods.go:61] "registry-proxy-jzvwd" [7d2f7d65-082f-42f9-a2e0-4329066b06c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:03:54.889584  538905 system_pods.go:61] "snapshot-controller-56fcc65765-4kxc6" [b3247ae3-203c-44f6-82e8-ef0144eb6497] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:03:54.889593  538905 system_pods.go:61] "snapshot-controller-56fcc65765-7w2w5" [b4d32957-684f-41c3-947a-ddc8a4d8fb33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:03:54.889597  538905 system_pods.go:61] "storage-provisioner" [723d3daa-3e07-4da6-ab13-d88904d4c881] Running
	I1205 19:03:54.889605  538905 system_pods.go:74] duration metric: took 9.042981ms to wait for pod list to return data ...
	I1205 19:03:54.889615  538905 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:03:54.892165  538905 default_sa.go:45] found service account: "default"
	I1205 19:03:54.892199  538905 default_sa.go:55] duration metric: took 2.570951ms for default service account to be created ...
	I1205 19:03:54.892211  538905 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:03:54.898297  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:54.899234  538905 system_pods.go:86] 18 kube-system pods found
	I1205 19:03:54.899261  538905 system_pods.go:89] "amd-gpu-device-plugin-xcvzc" [89313f55-0769-4cd7-af1d-e97c6833dcef] Running
	I1205 19:03:54.899269  538905 system_pods.go:89] "coredns-7c65d6cfc9-jz7lb" [56b461df-6acc-4973-9067-3d64d678111c] Running
	I1205 19:03:54.899276  538905 system_pods.go:89] "csi-hostpath-attacher-0" [8e9fadd5-acf2-477a-9d62-c47987d16129] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 19:03:54.899285  538905 system_pods.go:89] "csi-hostpath-resizer-0" [432c94bf-2efd-467c-95cb-1aa632b845cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 19:03:54.899292  538905 system_pods.go:89] "csi-hostpathplugin-64t5f" [5be510d8-669e-43b3-9429-cfb59274f96d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:03:54.899297  538905 system_pods.go:89] "etcd-addons-396564" [6f990ebe-6f3e-4cd3-81d8-ba9f8b3013a3] Running
	I1205 19:03:54.899301  538905 system_pods.go:89] "kube-apiserver-addons-396564" [119c3cdb-12a4-45b6-a46a-b42bcc85bd84] Running
	I1205 19:03:54.899305  538905 system_pods.go:89] "kube-controller-manager-addons-396564" [83c3fd83-132d-4811-a930-3e91899ce37e] Running
	I1205 19:03:54.899310  538905 system_pods.go:89] "kube-ingress-dns-minikube" [364ca423-ae05-4a12-a6fc-11a86e3213ba] Running
	I1205 19:03:54.899313  538905 system_pods.go:89] "kube-proxy-r9sk8" [f3d31a62-b4c2-4d67-801b-a8623f03af65] Running
	I1205 19:03:54.899317  538905 system_pods.go:89] "kube-scheduler-addons-396564" [58a5b5ae-c488-445a-ae98-8396aae2efce] Running
	I1205 19:03:54.899322  538905 system_pods.go:89] "metrics-server-84c5f94fbc-p7wrj" [3aec8457-6ee0-4eeb-9abe-871b30996d06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 19:03:54.899326  538905 system_pods.go:89] "nvidia-device-plugin-daemonset-pngv4" [53fc8bbc-5529-4aaf-81c2-c11c9b882577] Running
	I1205 19:03:54.899332  538905 system_pods.go:89] "registry-66c9cd494c-ljr8x" [0b9f7adc-96cd-4c61-aab5-70400f03a848] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 19:03:54.899339  538905 system_pods.go:89] "registry-proxy-jzvwd" [7d2f7d65-082f-42f9-a2e0-4329066b06c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:03:54.899345  538905 system_pods.go:89] "snapshot-controller-56fcc65765-4kxc6" [b3247ae3-203c-44f6-82e8-ef0144eb6497] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:03:54.899353  538905 system_pods.go:89] "snapshot-controller-56fcc65765-7w2w5" [b4d32957-684f-41c3-947a-ddc8a4d8fb33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:03:54.899357  538905 system_pods.go:89] "storage-provisioner" [723d3daa-3e07-4da6-ab13-d88904d4c881] Running
	I1205 19:03:54.899366  538905 system_pods.go:126] duration metric: took 7.149025ms to wait for k8s-apps to be running ...
	I1205 19:03:54.899372  538905 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:03:54.899417  538905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:03:54.914848  538905 system_svc.go:56] duration metric: took 15.466366ms WaitForService to wait for kubelet
	I1205 19:03:54.914878  538905 kubeadm.go:582] duration metric: took 37.776240851s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:03:54.914899  538905 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:03:54.917651  538905 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:03:54.917674  538905 node_conditions.go:123] node cpu capacity is 2
	I1205 19:03:54.917690  538905 node_conditions.go:105] duration metric: took 2.786458ms to run NodePressure ...
	I1205 19:03:54.917706  538905 start.go:241] waiting for startup goroutines ...
	I1205 19:03:55.164312  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:55.164821  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:55.225883  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:55.398837  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:55.665474  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:55.665802  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:55.724424  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:55.898156  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:56.164634  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:56.165246  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:56.225110  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:56.398336  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:56.665300  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:56.665319  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:56.725528  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:56.897992  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:57.165345  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:57.165806  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:57.224360  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:57.397847  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:57.666268  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:57.666639  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:57.725442  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:57.898497  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:58.165644  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:58.165843  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:58.224791  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:58.398326  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:58.665026  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:58.665257  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:58.725043  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:58.898862  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:59.165732  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:59.165988  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:59.266568  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:59.398469  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:03:59.665292  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:03:59.665711  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:03:59.724368  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:03:59.898445  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:00.164587  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:00.167195  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:00.224703  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:00.398197  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:00.666896  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:00.667969  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:00.725135  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:00.899149  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:01.165709  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:01.166155  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:01.225835  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:01.399175  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:01.665075  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:01.665369  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:01.725434  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:01.898507  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:02.165066  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:02.165359  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:02.225230  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:02.400030  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:02.666599  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:02.666781  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:02.724705  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:02.898481  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:03.165483  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:03.166821  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:03.224523  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:03.397978  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:03.997203  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:03.997786  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:03.998069  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:03.998725  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:04.166828  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:04.166980  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:04.266952  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:04.398158  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:04.671559  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:04.672179  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:04.726431  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:04.898086  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:05.165525  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:05.166502  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:05.225740  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:05.398295  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:05.665295  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:05.665689  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:05.724032  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:05.899217  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:06.166632  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:06.170111  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:06.225314  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:06.398267  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:06.666211  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:06.666807  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:06.725518  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:06.902145  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:07.165700  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:07.165933  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:07.226781  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:07.398071  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:07.666118  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:07.668077  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:07.724402  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:07.898055  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:08.165497  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:08.166641  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:08.229895  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:08.399016  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:08.666242  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:04:08.666404  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:08.767248  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:08.898544  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:09.164339  538905 kapi.go:107] duration metric: took 42.004068263s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 19:04:09.165390  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:09.225258  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:09.398855  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:09.670280  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:09.769065  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:09.898896  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:10.164923  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:10.225150  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:10.398263  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:10.665538  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:10.725047  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:10.898920  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:11.165432  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:11.225634  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:11.399243  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:11.671241  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:11.725004  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:11.898410  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:12.165562  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:12.225442  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:12.397936  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:12.665118  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:12.724555  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:12.898425  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:13.165883  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:13.226200  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:13.398236  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:14.034612  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:14.135982  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:14.136507  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:14.165041  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:14.224602  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:14.399017  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:14.665928  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:14.725428  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:14.899134  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:15.164860  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:15.225889  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:15.399216  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:15.665319  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:15.725113  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:15.901299  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:16.165566  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:16.225544  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:16.397899  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:16.665199  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:16.725317  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:16.897452  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:17.165569  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:17.224832  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:17.399071  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:17.664950  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:17.725115  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:17.899596  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:18.165694  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:18.225583  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:18.398486  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:18.664612  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:18.725305  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:18.897615  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:19.165462  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:19.225858  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:19.398641  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:19.664752  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:19.724521  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:19.897768  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:20.165474  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:20.267007  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:20.398501  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:20.664825  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:20.725221  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:20.899486  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:21.165586  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:21.266966  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:21.399517  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:21.664898  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:21.724711  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:21.898565  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:22.164990  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:22.224947  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:22.398445  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:22.665832  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:22.725004  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:22.898638  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:23.170481  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:23.227775  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:23.398305  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:23.665830  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:23.725184  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:23.898931  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:24.166543  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:24.225766  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:24.397870  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:24.666191  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:24.724692  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:24.899425  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:25.166799  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:25.267556  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:25.398956  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:25.665028  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:25.727668  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:25.897758  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:26.168078  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:26.224700  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:26.398115  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:26.665501  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:27.066893  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:27.067207  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:27.165308  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:27.225588  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:27.398700  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:27.666283  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:27.725336  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:27.899011  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:28.165639  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:28.225557  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:28.397914  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:28.665286  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:28.724940  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:28.900600  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:29.167471  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:29.270703  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:29.397771  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:29.665673  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:29.725171  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:29.899161  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:30.165255  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:30.224665  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:30.398318  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:30.666706  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:30.724698  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:30.898081  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:31.165143  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:31.224466  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:31.399405  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:31.665641  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:31.724771  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:31.898834  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:32.177744  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:32.228016  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:32.399308  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:32.669092  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:32.728758  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:32.904324  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:33.168403  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:33.225572  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:33.398522  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:33.664422  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:33.724615  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:33.906701  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:34.165670  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:34.266735  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:34.399539  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:34.664853  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:34.728466  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:34.897970  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:35.165033  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:35.225220  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:35.398988  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:35.668994  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:35.771182  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:35.899200  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:36.165409  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:36.225713  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:36.400010  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:36.666213  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:36.724911  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:36.898466  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:37.165988  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:37.224834  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:37.398577  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:37.664930  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:37.724924  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:37.898478  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:38.164386  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:38.225384  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:38.397527  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:38.664678  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:38.725313  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:38.898134  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:04:39.166334  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:39.267619  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:39.441602  538905 kapi.go:107] duration metric: took 1m9.547108072s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 19:04:39.443644  538905 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-396564 cluster.
	I1205 19:04:39.445116  538905 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 19:04:39.446702  538905 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 19:04:39.669011  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:39.726880  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:40.167397  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:40.273060  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:40.665856  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:40.725491  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:41.165920  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:41.224752  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:41.841987  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:41.845937  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:42.166053  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:42.267570  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:42.668896  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:42.725172  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:43.165563  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:43.225351  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:43.665697  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:43.724972  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:44.165507  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:44.225155  538905 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:04:44.682526  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:44.725890  538905 kapi.go:107] duration metric: took 1m16.505944912s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 19:04:45.165247  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:45.665445  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:46.165901  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:46.664963  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:47.166768  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:47.665295  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:48.165662  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:48.665060  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:49.164465  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:49.665847  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:50.165029  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:50.665470  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:51.166214  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:51.666110  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:52.165955  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:52.665616  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:53.164548  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:53.665363  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:54.166091  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:54.665385  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:55.165481  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:55.665754  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:56.165401  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:56.666064  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:57.165387  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:57.665025  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:58.165053  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:58.665280  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:59.165210  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:04:59.666525  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:00.165751  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:00.665949  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:01.165765  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:01.665296  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:02.165686  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:02.665470  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:03.165585  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:03.664890  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:04.164960  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:04.664924  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:05.166274  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:05.665557  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:06.166219  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:06.665374  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:07.165595  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:07.665000  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:08.165070  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:08.665589  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:09.164864  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:09.674144  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:10.165463  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:10.665644  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:11.164727  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:11.664817  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:12.165512  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:12.665873  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:13.165154  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:13.665080  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:14.165250  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:14.665248  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:15.164308  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:15.665437  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:16.165253  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:16.665333  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:17.165379  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:17.665128  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:18.165552  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:18.665677  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:19.164813  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:19.665378  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:20.165393  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:20.666191  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:21.164654  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:21.666380  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:22.165923  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:22.665444  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:23.165731  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:23.665878  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:24.165283  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:24.665426  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:25.165191  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:25.665781  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:26.165726  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:26.665719  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:27.164846  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:27.666066  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:28.164694  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:28.665177  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:29.165463  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:29.665549  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:30.165823  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:30.665429  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:31.165351  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:31.665835  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:32.165182  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:32.665500  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:33.165714  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:33.664865  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:34.164570  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:34.665185  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:35.165383  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:35.665724  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:36.166306  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:36.666624  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:37.165304  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:37.665551  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:38.165799  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:38.664772  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:39.164683  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:39.664394  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:40.165415  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:40.668709  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:41.164405  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:41.665845  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:42.166465  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:42.664740  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:43.165133  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:43.666272  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:44.165881  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:44.663967  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:45.166488  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:45.666195  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:46.165093  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:46.666035  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:47.164865  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:47.664737  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:48.164464  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:48.665655  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:49.578460  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:49.665361  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:50.167915  538905 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:05:50.665652  538905 kapi.go:107] duration metric: took 2m23.505391143s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 19:05:50.667912  538905 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, inspektor-gadget, cloud-spanner, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1205 19:05:50.669483  538905 addons.go:510] duration metric: took 2m33.530805777s for enable addons: enabled=[ingress-dns nvidia-device-plugin inspektor-gadget cloud-spanner amd-gpu-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1205 19:05:50.669538  538905 start.go:246] waiting for cluster config update ...
	I1205 19:05:50.669559  538905 start.go:255] writing updated cluster config ...
	I1205 19:05:50.669873  538905 ssh_runner.go:195] Run: rm -f paused
	I1205 19:05:50.724736  538905 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 19:05:50.726653  538905 out.go:177] * Done! kubectl is now configured to use "addons-396564" cluster and "default" namespace by default
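
The repeated kapi.go:96 "waiting for pod ... current state: Pending" lines above come from a simple readiness poll against a label selector. The following is a minimal, hypothetical sketch of such a poll using client-go; the kubeconfig path, namespace, interval, and timeout are illustrative assumptions, not values taken from this run.

// Sketch only: poll pods matching a label selector until one reports Running,
// roughly what the "waiting for pod" lines above reflect.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a local kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	selector := "app.kubernetes.io/name=ingress-nginx"
	for {
		pods, err := client.CoreV1().Pods("ingress-nginx").List(ctx,
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			// Mirrors the log format: current phase per matching pod.
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			if p.Status.Phase == corev1.PodRunning {
				return
			}
		}
		select {
		case <-ctx.Done():
			log.Fatalf("timed out waiting for %s", selector)
		case <-time.After(500 * time.Millisecond):
		}
	}
}

A loop like this explains the cadence seen above: roughly two list calls per second per selector until the pod leaves Pending or the per-addon timeout expires.
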
	
	
	==> CRI-O <==
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.626481736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cde5f005-f49d-49a7-b37d-6729198cc873 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.628073389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31eb4df8-422c-4013-9467-b94aab0d797a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.629266915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425915629240206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31eb4df8-422c-4013-9467-b94aab0d797a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.629931646Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=105056da-b3f8-4d48-be8f-0c18052300b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.629986493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=105056da-b3f8-4d48-be8f-0c18052300b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.630234092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae26c7626bfb973ca1f5249ef2ad2263d0bd1e90cafa018d74256e03b3d86547,PodSandboxId:04e08f6f6c383ba7e7c91abfdb78f3c3d907d88f9d0d9d06254b935c85c0d0c3,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733425749139238289,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-z824g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3b29e45-9d9f-400e-ac12-8846b47d56a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1420a7c820a076fdfced7aacfe1fccedb6314e31c747d81513ddf7e07b6895c5,PodSandboxId:ec3d843ef852d8683c160e42860bc8d8cdbb361d7ced78dc57559b0dccf91ed6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733425606318379684,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e772fa6-e5dd-49b1-a470-bdca82384b0b,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80305a2941c96368ed3244f100796fdded119c2ef7516e38ba7e3668377e6e57,PodSandboxId:ba639c8e211c850ecc057e571093b6dc0d4934d07509176d09837847b4bb38ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733425554801287054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ab9fb43-6d1a-4c93-b
7a8-53945e058344,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50d9c2bd4f7c85bc5ebb19c9c273d508e1301d4e34d5e88e109b1981a40a79b,PodSandboxId:0a2f2f63a4115b01d236445efb49e4bdbcf925902d238e9ccd32f703863a2355,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733425444128224343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-p7wrj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3aec8457-6ee0-4eeb-9abe-871b30996d06,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b6e3a4fc29407b5843faf06977ce6db4e1a5bbdd36df7bcfc91433c4d9799c,PodSandboxId:dc1450352831648af2fa98196420623803e3dc20de44f6ad344378a1588ee48a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733425418424303370,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xcvzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89313f55-0769-4cd7-af1d-e97c6833dcef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbae14404a1302fb8ac2eb0ca8137ed78f59059acd980e3980d42a29c87e08f9,PodSandboxId:4e52b6a4649ac22d5f69e6f436af0c8ed5d609cd11284a474260bbde2bc2960b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733425404286986575,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723d3daa-3e07-4da6-ab13-d88904d4c881,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789a4b25d853bff2e452046d0bbe30daa5dd750b4815b30aef8774af9201999b,PodSandboxId:dbe32b06c8a1b21a3663371d35b001cd90a3d208b335bff5ac6850e86d92421f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733425401627513876,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jz7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b461df-6acc-4973-9067-3d64d678111c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25818b4b391668a26fc48ea3644968bead720d71a2521a4cf15faef5dcf7db75,PodSandboxId:5cf2096e393a1bfc1f8315d3451d888d501c7de4fcfb2b1a1d55e820e9099042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733425398081799975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9sk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d31a62-b4c2-4d67-801b-a8623f03af65,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555285d2c5baa83e8a141e31cb63b1ecf7f24747793d17bc184d077aa32d32ff,PodSandboxId:8390342f3267cb0ec0781d40326f871cc206876b0335b89eb5303cce9eddc54b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733425387149945176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03644608f10a19dcf721eb0920007288,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16b5c2bcff1353331c9f81ed262d909e8ba9ec868b3ad9d3a34b228ba38fe53,PodSandboxId:0ea5f64294d5938655208e3040270b0bda77dee532116d1f88002dc1c1901133,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733425387132208911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2749c1e2468930ab4ed523429fa7366,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbbe2cd3da913dd2fca7ca3cb2016f3e74b87dcffb1f4e7915d49104984e28e,PodSandboxId:eef6ab607009ddac22a66acc75797bc43630ff9d833bbdc333ff952afdd8d37f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733425387097760274,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce9e3111d0ed63fd4adf56f8ae1a972,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d573ee316398f9d872687b34b67057b642f060e9f6be1e2047272f413f522cc6,PodSandboxId:593ce4fc9d33b9231528e59a66954446b8bc8d05ac419e1be5f5caed8bcef141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1733425387066497445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3feb065f68b065eae6360503f017d29d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=105056da-b3f8-4d48-be8f-0c18052300b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.669559500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b840af75-9c6c-4e50-b0db-7c8aa1c78273 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.669636436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b840af75-9c6c-4e50-b0db-7c8aa1c78273 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.670773408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91df78fb-718c-469b-9add-a0dda6357816 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.671959448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425915671931932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91df78fb-718c-469b-9add-a0dda6357816 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.672787835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cf02983-6d08-48ce-adbb-7e4130877674 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.672842973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cf02983-6d08-48ce-adbb-7e4130877674 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.673101572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae26c7626bfb973ca1f5249ef2ad2263d0bd1e90cafa018d74256e03b3d86547,PodSandboxId:04e08f6f6c383ba7e7c91abfdb78f3c3d907d88f9d0d9d06254b935c85c0d0c3,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733425749139238289,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-z824g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3b29e45-9d9f-400e-ac12-8846b47d56a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1420a7c820a076fdfced7aacfe1fccedb6314e31c747d81513ddf7e07b6895c5,PodSandboxId:ec3d843ef852d8683c160e42860bc8d8cdbb361d7ced78dc57559b0dccf91ed6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733425606318379684,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e772fa6-e5dd-49b1-a470-bdca82384b0b,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80305a2941c96368ed3244f100796fdded119c2ef7516e38ba7e3668377e6e57,PodSandboxId:ba639c8e211c850ecc057e571093b6dc0d4934d07509176d09837847b4bb38ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733425554801287054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ab9fb43-6d1a-4c93-b
7a8-53945e058344,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50d9c2bd4f7c85bc5ebb19c9c273d508e1301d4e34d5e88e109b1981a40a79b,PodSandboxId:0a2f2f63a4115b01d236445efb49e4bdbcf925902d238e9ccd32f703863a2355,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733425444128224343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-p7wrj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3aec8457-6ee0-4eeb-9abe-871b30996d06,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b6e3a4fc29407b5843faf06977ce6db4e1a5bbdd36df7bcfc91433c4d9799c,PodSandboxId:dc1450352831648af2fa98196420623803e3dc20de44f6ad344378a1588ee48a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733425418424303370,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xcvzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89313f55-0769-4cd7-af1d-e97c6833dcef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbae14404a1302fb8ac2eb0ca8137ed78f59059acd980e3980d42a29c87e08f9,PodSandboxId:4e52b6a4649ac22d5f69e6f436af0c8ed5d609cd11284a474260bbde2bc2960b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733425404286986575,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723d3daa-3e07-4da6-ab13-d88904d4c881,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789a4b25d853bff2e452046d0bbe30daa5dd750b4815b30aef8774af9201999b,PodSandboxId:dbe32b06c8a1b21a3663371d35b001cd90a3d208b335bff5ac6850e86d92421f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733425401627513876,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jz7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b461df-6acc-4973-9067-3d64d678111c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25818b4b391668a26fc48ea3644968bead720d71a2521a4cf15faef5dcf7db75,PodSandboxId:5cf2096e393a1bfc1f8315d3451d888d501c7de4fcfb2b1a1d55e820e9099042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733425398081799975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9sk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d31a62-b4c2-4d67-801b-a8623f03af65,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555285d2c5baa83e8a141e31cb63b1ecf7f24747793d17bc184d077aa32d32ff,PodSandboxId:8390342f3267cb0ec0781d40326f871cc206876b0335b89eb5303cce9eddc54b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733425387149945176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03644608f10a19dcf721eb0920007288,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16b5c2bcff1353331c9f81ed262d909e8ba9ec868b3ad9d3a34b228ba38fe53,PodSandboxId:0ea5f64294d5938655208e3040270b0bda77dee532116d1f88002dc1c1901133,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733425387132208911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2749c1e2468930ab4ed523429fa7366,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbbe2cd3da913dd2fca7ca3cb2016f3e74b87dcffb1f4e7915d49104984e28e,PodSandboxId:eef6ab607009ddac22a66acc75797bc43630ff9d833bbdc333ff952afdd8d37f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733425387097760274,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce9e3111d0ed63fd4adf56f8ae1a972,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d573ee316398f9d872687b34b67057b642f060e9f6be1e2047272f413f522cc6,PodSandboxId:593ce4fc9d33b9231528e59a66954446b8bc8d05ac419e1be5f5caed8bcef141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1733425387066497445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3feb065f68b065eae6360503f017d29d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cf02983-6d08-48ce-adbb-7e4130877674 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.709466184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c1fc7d3-9c5f-49d2-b3f4-546758116e5f name=/runtime.v1.RuntimeService/Version
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.709542736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c1fc7d3-9c5f-49d2-b3f4-546758116e5f name=/runtime.v1.RuntimeService/Version
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.711135634Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bb53e2e-63a9-42d4-9fd7-546ff410f725 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.712313481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425915712289990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bb53e2e-63a9-42d4-9fd7-546ff410f725 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.712874760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d284bae-fde0-4008-984d-95ac0242ee82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.712947972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d284bae-fde0-4008-984d-95ac0242ee82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.713218695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae26c7626bfb973ca1f5249ef2ad2263d0bd1e90cafa018d74256e03b3d86547,PodSandboxId:04e08f6f6c383ba7e7c91abfdb78f3c3d907d88f9d0d9d06254b935c85c0d0c3,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733425749139238289,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-z824g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3b29e45-9d9f-400e-ac12-8846b47d56a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1420a7c820a076fdfced7aacfe1fccedb6314e31c747d81513ddf7e07b6895c5,PodSandboxId:ec3d843ef852d8683c160e42860bc8d8cdbb361d7ced78dc57559b0dccf91ed6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733425606318379684,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e772fa6-e5dd-49b1-a470-bdca82384b0b,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80305a2941c96368ed3244f100796fdded119c2ef7516e38ba7e3668377e6e57,PodSandboxId:ba639c8e211c850ecc057e571093b6dc0d4934d07509176d09837847b4bb38ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733425554801287054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ab9fb43-6d1a-4c93-b
7a8-53945e058344,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50d9c2bd4f7c85bc5ebb19c9c273d508e1301d4e34d5e88e109b1981a40a79b,PodSandboxId:0a2f2f63a4115b01d236445efb49e4bdbcf925902d238e9ccd32f703863a2355,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733425444128224343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-p7wrj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3aec8457-6ee0-4eeb-9abe-871b30996d06,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b6e3a4fc29407b5843faf06977ce6db4e1a5bbdd36df7bcfc91433c4d9799c,PodSandboxId:dc1450352831648af2fa98196420623803e3dc20de44f6ad344378a1588ee48a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733425418424303370,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xcvzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89313f55-0769-4cd7-af1d-e97c6833dcef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbae14404a1302fb8ac2eb0ca8137ed78f59059acd980e3980d42a29c87e08f9,PodSandboxId:4e52b6a4649ac22d5f69e6f436af0c8ed5d609cd11284a474260bbde2bc2960b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733425404286986575,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723d3daa-3e07-4da6-ab13-d88904d4c881,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789a4b25d853bff2e452046d0bbe30daa5dd750b4815b30aef8774af9201999b,PodSandboxId:dbe32b06c8a1b21a3663371d35b001cd90a3d208b335bff5ac6850e86d92421f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733425401627513876,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jz7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b461df-6acc-4973-9067-3d64d678111c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25818b4b391668a26fc48ea3644968bead720d71a2521a4cf15faef5dcf7db75,PodSandboxId:5cf2096e393a1bfc1f8315d3451d888d501c7de4fcfb2b1a1d55e820e9099042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733425398081799975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9sk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d31a62-b4c2-4d67-801b-a8623f03af65,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555285d2c5baa83e8a141e31cb63b1ecf7f24747793d17bc184d077aa32d32ff,PodSandboxId:8390342f3267cb0ec0781d40326f871cc206876b0335b89eb5303cce9eddc54b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733425387149945176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03644608f10a19dcf721eb0920007288,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16b5c2bcff1353331c9f81ed262d909e8ba9ec868b3ad9d3a34b228ba38fe53,PodSandboxId:0ea5f64294d5938655208e3040270b0bda77dee532116d1f88002dc1c1901133,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733425387132208911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2749c1e2468930ab4ed523429fa7366,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbbe2cd3da913dd2fca7ca3cb2016f3e74b87dcffb1f4e7915d49104984e28e,PodSandboxId:eef6ab607009ddac22a66acc75797bc43630ff9d833bbdc333ff952afdd8d37f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733425387097760274,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce9e3111d0ed63fd4adf56f8ae1a972,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d573ee316398f9d872687b34b67057b642f060e9f6be1e2047272f413f522cc6,PodSandboxId:593ce4fc9d33b9231528e59a66954446b8bc8d05ac419e1be5f5caed8bcef141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1733425387066497445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3feb065f68b065eae6360503f017d29d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d284bae-fde0-4008-984d-95ac0242ee82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.736513668Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=09ec0a99-9bbe-47c2-b0ba-b9fd10032193 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.737072412Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:04e08f6f6c383ba7e7c91abfdb78f3c3d907d88f9d0d9d06254b935c85c0d0c3,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-z824g,Uid:a3b29e45-9d9f-400e-ac12-8846b47d56a4,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733425746263269607,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-z824g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3b29e45-9d9f-400e-ac12-8846b47d56a4,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T19:09:05.939857476Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ec3d843ef852d8683c160e42860bc8d8cdbb361d7ced78dc57559b0dccf91ed6,Metadata:&PodSandboxMetadata{Name:nginx,Uid:8e772fa6-e5dd-49b1-a470-bdca82384b0b,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1733425601741185322,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e772fa6-e5dd-49b1-a470-bdca82384b0b,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T19:06:41.420213420Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ba639c8e211c850ecc057e571093b6dc0d4934d07509176d09837847b4bb38ee,Metadata:&PodSandboxMetadata{Name:busybox,Uid:0ab9fb43-6d1a-4c93-b7a8-53945e058344,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733425551618335736,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ab9fb43-6d1a-4c93-b7a8-53945e058344,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T19:05:51.308976996Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a2f2f63a4115b01d2
36445efb49e4bdbcf925902d238e9ccd32f703863a2355,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-p7wrj,Uid:3aec8457-6ee0-4eeb-9abe-871b30996d06,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733425403725357760,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-p7wrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aec8457-6ee0-4eeb-9abe-871b30996d06,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T19:03:23.114128674Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4e52b6a4649ac22d5f69e6f436af0c8ed5d609cd11284a474260bbde2bc2960b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:723d3daa-3e07-4da6-ab13-d88904d4c881,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733425403187250802,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723d3daa-3e07-4da6-ab13-d88904d4c881,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-05T19:03:22.801140018Z,kubernetes.io/config.source: api,},RuntimeHandler:,
},&PodSandbox{Id:dc1450352831648af2fa98196420623803e3dc20de44f6ad344378a1588ee48a,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-xcvzc,Uid:89313f55-0769-4cd7-af1d-e97c6833dcef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733425400015420531,Labels:map[string]string{controller-revision-hash: 59cf7d9b45,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-xcvzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89313f55-0769-4cd7-af1d-e97c6833dcef,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T19:03:19.693799159Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dbe32b06c8a1b21a3663371d35b001cd90a3d208b335bff5ac6850e86d92421f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-jz7lb,Uid:56b461df-6acc-4973-9067-3d64d678111c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733425397736491098,Labels:map[st
ring]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-jz7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b461df-6acc-4973-9067-3d64d678111c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T19:03:17.426265596Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5cf2096e393a1bfc1f8315d3451d888d501c7de4fcfb2b1a1d55e820e9099042,Metadata:&PodSandboxMetadata{Name:kube-proxy-r9sk8,Uid:f3d31a62-b4c2-4d67-801b-a8623f03af65,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733425397284499876,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-r9sk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d31a62-b4c2-4d67-801b-a8623f03af65,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T19:03:16.312687260Z,kubern
etes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:593ce4fc9d33b9231528e59a66954446b8bc8d05ac419e1be5f5caed8bcef141,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-396564,Uid:3feb065f68b065eae6360503f017d29d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733425386889478432,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3feb065f68b065eae6360503f017d29d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.9:8443,kubernetes.io/config.hash: 3feb065f68b065eae6360503f017d29d,kubernetes.io/config.seen: 2024-12-05T19:03:06.225781831Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8390342f3267cb0ec0781d40326f871cc206876b0335b89eb5303cce9eddc54b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-396564,Uid:03644608f10a19d
cf721eb0920007288,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733425386883380506,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03644608f10a19dcf721eb0920007288,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 03644608f10a19dcf721eb0920007288,kubernetes.io/config.seen: 2024-12-05T19:03:06.225785734Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ea5f64294d5938655208e3040270b0bda77dee532116d1f88002dc1c1901133,Metadata:&PodSandboxMetadata{Name:etcd-addons-396564,Uid:e2749c1e2468930ab4ed523429fa7366,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733425386881695259,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2749c1e2
468930ab4ed523429fa7366,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.9:2379,kubernetes.io/config.hash: e2749c1e2468930ab4ed523429fa7366,kubernetes.io/config.seen: 2024-12-05T19:03:06.225787857Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eef6ab607009ddac22a66acc75797bc43630ff9d833bbdc333ff952afdd8d37f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-396564,Uid:3ce9e3111d0ed63fd4adf56f8ae1a972,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733425386880142775,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce9e3111d0ed63fd4adf56f8ae1a972,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3ce9e3111d0ed63fd4adf56f8ae1a972,kubernetes.io/config.seen: 2024-12-05T19:03:06.225786679Z,kubernetes.io/config.source: fi
le,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=09ec0a99-9bbe-47c2-b0ba-b9fd10032193 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.737792796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9329e97b-e95a-4289-b27f-4b48ef24ded9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.737870379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9329e97b-e95a-4289-b27f-4b48ef24ded9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:11:55 addons-396564 crio[665]: time="2024-12-05 19:11:55.738130086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae26c7626bfb973ca1f5249ef2ad2263d0bd1e90cafa018d74256e03b3d86547,PodSandboxId:04e08f6f6c383ba7e7c91abfdb78f3c3d907d88f9d0d9d06254b935c85c0d0c3,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733425749139238289,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-z824g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a3b29e45-9d9f-400e-ac12-8846b47d56a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1420a7c820a076fdfced7aacfe1fccedb6314e31c747d81513ddf7e07b6895c5,PodSandboxId:ec3d843ef852d8683c160e42860bc8d8cdbb361d7ced78dc57559b0dccf91ed6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733425606318379684,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e772fa6-e5dd-49b1-a470-bdca82384b0b,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80305a2941c96368ed3244f100796fdded119c2ef7516e38ba7e3668377e6e57,PodSandboxId:ba639c8e211c850ecc057e571093b6dc0d4934d07509176d09837847b4bb38ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733425554801287054,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ab9fb43-6d1a-4c93-b
7a8-53945e058344,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50d9c2bd4f7c85bc5ebb19c9c273d508e1301d4e34d5e88e109b1981a40a79b,PodSandboxId:0a2f2f63a4115b01d236445efb49e4bdbcf925902d238e9ccd32f703863a2355,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733425444128224343,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-p7wrj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 3aec8457-6ee0-4eeb-9abe-871b30996d06,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b6e3a4fc29407b5843faf06977ce6db4e1a5bbdd36df7bcfc91433c4d9799c,PodSandboxId:dc1450352831648af2fa98196420623803e3dc20de44f6ad344378a1588ee48a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733425418424303370,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xcvzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89313f55-0769-4cd7-af1d-e97c6833dcef,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbae14404a1302fb8ac2eb0ca8137ed78f59059acd980e3980d42a29c87e08f9,PodSandboxId:4e52b6a4649ac22d5f69e6f436af0c8ed5d609cd11284a474260bbde2bc2960b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733425404286986575,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723d3daa-3e07-4da6-ab13-d88904d4c881,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:789a4b25d853bff2e452046d0bbe30daa5dd750b4815b30aef8774af9201999b,PodSandboxId:dbe32b06c8a1b21a3663371d35b001cd90a3d208b335bff5ac6850e86d92421f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733425401627513876,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jz7lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b461df-6acc-4973-9067-3d64d678111c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25818b4b391668a26fc48ea3644968bead720d71a2521a4cf15faef5dcf7db75,PodSandboxId:5cf2096e393a1bfc1f8315d3451d888d501c7de4fcfb2b1a1d55e820e9099042,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733425398081799975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9sk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d31a62-b4c2-4d67-801b-a8623f03af65,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:555285d2c5baa83e8a141e31cb63b1ecf7f24747793d17bc184d077aa32d32ff,PodSandboxId:8390342f3267cb0ec0781d40326f871cc206876b0335b89eb5303cce9eddc54b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733425387149945176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03644608f10a19dcf721eb0920007288,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e16b5c2bcff1353331c9f81ed262d909e8ba9ec868b3ad9d3a34b228ba38fe53,PodSandboxId:0ea5f64294d5938655208e3040270b0bda77dee532116d1f88002dc1c1901133,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e591
3fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733425387132208911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2749c1e2468930ab4ed523429fa7366,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acbbe2cd3da913dd2fca7ca3cb2016f3e74b87dcffb1f4e7915d49104984e28e,PodSandboxId:eef6ab607009ddac22a66acc75797bc43630ff9d833bbdc333ff952afdd8d37f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733425387097760274,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce9e3111d0ed63fd4adf56f8ae1a972,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d573ee316398f9d872687b34b67057b642f060e9f6be1e2047272f413f522cc6,PodSandboxId:593ce4fc9d33b9231528e59a66954446b8bc8d05ac419e1be5f5caed8bcef141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1733425387066497445,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-396564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3feb065f68b065eae6360503f017d29d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9329e97b-e95a-4289-b27f-4b48ef24ded9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ae26c7626bfb9       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   04e08f6f6c383       hello-world-app-55bf9c44b4-z824g
	1420a7c820a07       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         5 minutes ago       Running             nginx                     0                   ec3d843ef852d       nginx
	80305a2941c96       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   ba639c8e211c8       busybox
	a50d9c2bd4f7c       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   0a2f2f63a4115       metrics-server-84c5f94fbc-p7wrj
	29b6e3a4fc294       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                8 minutes ago       Running             amd-gpu-device-plugin     0                   dc14503528316       amd-gpu-device-plugin-xcvzc
	dbae14404a130       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   4e52b6a4649ac       storage-provisioner
	789a4b25d853b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        8 minutes ago       Running             coredns                   0                   dbe32b06c8a1b       coredns-7c65d6cfc9-jz7lb
	25818b4b39166       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        8 minutes ago       Running             kube-proxy                0                   5cf2096e393a1       kube-proxy-r9sk8
	555285d2c5baa       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        8 minutes ago       Running             kube-controller-manager   0                   8390342f3267c       kube-controller-manager-addons-396564
	e16b5c2bcff13       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   0ea5f64294d59       etcd-addons-396564
	acbbe2cd3da91       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        8 minutes ago       Running             kube-scheduler            0                   eef6ab607009d       kube-scheduler-addons-396564
	d573ee316398f       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        8 minutes ago       Running             kube-apiserver            0                   593ce4fc9d33b       kube-apiserver-addons-396564
	
	
	==> coredns [789a4b25d853bff2e452046d0bbe30daa5dd750b4815b30aef8774af9201999b] <==
	[INFO] 10.244.0.23:45723 - 57177 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000120444s
	[INFO] 10.244.0.23:37476 - 44138 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00015095s
	[INFO] 10.244.0.23:45723 - 20805 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060597s
	[INFO] 10.244.0.23:37476 - 21747 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000091808s
	[INFO] 10.244.0.23:45723 - 10322 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000129041s
	[INFO] 10.244.0.23:37476 - 46996 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059299s
	[INFO] 10.244.0.23:45723 - 39859 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000196208s
	[INFO] 10.244.0.23:37476 - 18613 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00011634s
	[INFO] 10.244.0.23:45723 - 52261 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000560772s
	[INFO] 10.244.0.23:37476 - 19761 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000136699s
	[INFO] 10.244.0.23:45723 - 5535 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000235169s
	[INFO] 10.244.0.23:43387 - 15927 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000100854s
	[INFO] 10.244.0.23:43387 - 25263 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073224s
	[INFO] 10.244.0.23:44583 - 54437 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000147668s
	[INFO] 10.244.0.23:43387 - 23952 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045523s
	[INFO] 10.244.0.23:44583 - 65531 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000079401s
	[INFO] 10.244.0.23:44583 - 9008 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082667s
	[INFO] 10.244.0.23:43387 - 37964 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000193195s
	[INFO] 10.244.0.23:43387 - 10418 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041802s
	[INFO] 10.244.0.23:44583 - 47994 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000088688s
	[INFO] 10.244.0.23:44583 - 15546 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000163919s
	[INFO] 10.244.0.23:43387 - 48719 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000240358s
	[INFO] 10.244.0.23:44583 - 29516 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040519s
	[INFO] 10.244.0.23:44583 - 61997 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060587s
	[INFO] 10.244.0.23:43387 - 15718 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000095848s
	
	
	==> describe nodes <==
	Name:               addons-396564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-396564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=addons-396564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T19_03_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-396564
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:03:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-396564
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:11:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:09:21 +0000   Thu, 05 Dec 2024 19:03:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:09:21 +0000   Thu, 05 Dec 2024 19:03:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:09:21 +0000   Thu, 05 Dec 2024 19:03:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:09:21 +0000   Thu, 05 Dec 2024 19:03:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    addons-396564
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d7365bcf3de43d58534ecb48390e7f3
	  System UUID:                8d7365bc-f3de-43d5-8534-ecb48390e7f3
	  Boot ID:                    2e6976b8-6ce2-402d-812d-ae00122d3fd1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  default                     hello-world-app-55bf9c44b4-z824g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 amd-gpu-device-plugin-xcvzc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 coredns-7c65d6cfc9-jz7lb                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m38s
	  kube-system                 etcd-addons-396564                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m44s
	  kube-system                 kube-apiserver-addons-396564             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kube-controller-manager-addons-396564    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-proxy-r9sk8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-scheduler-addons-396564             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 metrics-server-84c5f94fbc-p7wrj          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m32s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m36s                  kube-proxy       
	  Normal  Starting                 8m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m49s (x8 over 8m49s)  kubelet          Node addons-396564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m49s (x8 over 8m49s)  kubelet          Node addons-396564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m49s (x7 over 8m49s)  kubelet          Node addons-396564 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m43s                  kubelet          Node addons-396564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m43s                  kubelet          Node addons-396564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m43s                  kubelet          Node addons-396564 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m42s                  kubelet          Node addons-396564 status is now: NodeReady
	  Normal  RegisteredNode           8m39s                  node-controller  Node addons-396564 event: Registered Node addons-396564 in Controller
	
	
	==> dmesg <==
	[  +5.293286] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	[  +0.152245] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.040200] kauditd_printk_skb: 118 callbacks suppressed
	[  +5.128557] kauditd_printk_skb: 137 callbacks suppressed
	[  +7.941395] kauditd_printk_skb: 87 callbacks suppressed
	[Dec 5 19:04] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.367507] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.686741] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.110383] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.006668] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.616997] kauditd_printk_skb: 11 callbacks suppressed
	[Dec 5 19:05] kauditd_printk_skb: 14 callbacks suppressed
	[ +36.029991] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.022704] kauditd_printk_skb: 11 callbacks suppressed
	[Dec 5 19:06] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.138369] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.088802] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.180590] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.639910] kauditd_printk_skb: 59 callbacks suppressed
	[  +8.518882] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.665826] kauditd_printk_skb: 23 callbacks suppressed
	[Dec 5 19:07] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.870219] kauditd_printk_skb: 7 callbacks suppressed
	[Dec 5 19:09] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.577452] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [e16b5c2bcff1353331c9f81ed262d909e8ba9ec868b3ad9d3a34b228ba38fe53] <==
	{"level":"info","ts":"2024-12-05T19:04:41.749782Z","caller":"traceutil/trace.go:171","msg":"trace[47050269] linearizableReadLoop","detail":"{readStateIndex:1168; appliedIndex:1168; }","duration":"393.219427ms","start":"2024-12-05T19:04:41.356547Z","end":"2024-12-05T19:04:41.749767Z","steps":["trace[47050269] 'read index received'  (duration: 393.211648ms)","trace[47050269] 'applied index is now lower than readState.Index'  (duration: 6.962µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T19:04:41.749936Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"393.379814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.9\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-12-05T19:04:41.750009Z","caller":"traceutil/trace.go:171","msg":"trace[1127141615] range","detail":"{range_begin:/registry/masterleases/192.168.39.9; range_end:; response_count:1; response_revision:1134; }","duration":"393.459568ms","start":"2024-12-05T19:04:41.356542Z","end":"2024-12-05T19:04:41.750001Z","steps":["trace[1127141615] 'agreement among raft nodes before linearized reading'  (duration: 393.303173ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:41.750030Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:04:41.356500Z","time spent":"393.524767ms","remote":"127.0.0.1:36586","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":155,"request content":"key:\"/registry/masterleases/192.168.39.9\" "}
	{"level":"info","ts":"2024-12-05T19:04:41.822114Z","caller":"traceutil/trace.go:171","msg":"trace[414069872] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"315.482265ms","start":"2024-12-05T19:04:41.506617Z","end":"2024-12-05T19:04:41.822100Z","steps":["trace[414069872] 'process raft request'  (duration: 311.360036ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:41.823203Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:04:41.506581Z","time spent":"316.511236ms","remote":"127.0.0.1:39538","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-k6bflfeh4ottzntjhcieubcdvm\" mod_revision:1049 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-k6bflfeh4ottzntjhcieubcdvm\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-k6bflfeh4ottzntjhcieubcdvm\" > >"}
	{"level":"warn","ts":"2024-12-05T19:04:41.824994Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"447.1684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:04:41.825037Z","caller":"traceutil/trace.go:171","msg":"trace[1278542418] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1135; }","duration":"447.215587ms","start":"2024-12-05T19:04:41.377810Z","end":"2024-12-05T19:04:41.825026Z","steps":["trace[1278542418] 'agreement among raft nodes before linearized reading'  (duration: 447.14086ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:41.825067Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:04:41.377770Z","time spent":"447.290337ms","remote":"127.0.0.1:36536","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-12-05T19:04:41.825215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.0302ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:04:41.825243Z","caller":"traceutil/trace.go:171","msg":"trace[406275605] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"114.061839ms","start":"2024-12-05T19:04:41.711173Z","end":"2024-12-05T19:04:41.825235Z","steps":["trace[406275605] 'agreement among raft nodes before linearized reading'  (duration: 113.999526ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:04:41.825323Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.381181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:04:41.825347Z","caller":"traceutil/trace.go:171","msg":"trace[1268234780] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"174.408621ms","start":"2024-12-05T19:04:41.650932Z","end":"2024-12-05T19:04:41.825341Z","steps":["trace[1268234780] 'agreement among raft nodes before linearized reading'  (duration: 174.361258ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:05:46.973365Z","caller":"traceutil/trace.go:171","msg":"trace[1518702504] linearizableReadLoop","detail":"{readStateIndex:1311; appliedIndex:1310; }","duration":"185.554163ms","start":"2024-12-05T19:05:46.787577Z","end":"2024-12-05T19:05:46.973131Z","steps":["trace[1518702504] 'read index received'  (duration: 184.850581ms)","trace[1518702504] 'applied index is now lower than readState.Index'  (duration: 702.525µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T19:05:46.974023Z","caller":"traceutil/trace.go:171","msg":"trace[1898178183] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"290.500968ms","start":"2024-12-05T19:05:46.683313Z","end":"2024-12-05T19:05:46.973814Z","steps":["trace[1898178183] 'process raft request'  (duration: 288.983657ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:05:49.558286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"408.925538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:05:49.558417Z","caller":"traceutil/trace.go:171","msg":"trace[2142991964] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1263; }","duration":"409.132402ms","start":"2024-12-05T19:05:49.149262Z","end":"2024-12-05T19:05:49.558394Z","steps":["trace[2142991964] 'range keys from in-memory index tree'  (duration: 408.878097ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:05:49.558483Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T19:05:49.149229Z","time spent":"409.22915ms","remote":"127.0.0.1:39460","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-05T19:05:49.558691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.60697ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:05:49.558797Z","caller":"traceutil/trace.go:171","msg":"trace[2018104964] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1263; }","duration":"375.722755ms","start":"2024-12-05T19:05:49.183064Z","end":"2024-12-05T19:05:49.558787Z","steps":["trace[2018104964] 'range keys from in-memory index tree'  (duration: 375.597926ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:05:49.558991Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.840937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:05:49.559042Z","caller":"traceutil/trace.go:171","msg":"trace[831807354] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1263; }","duration":"186.894772ms","start":"2024-12-05T19:05:49.372139Z","end":"2024-12-05T19:05:49.559034Z","steps":["trace[831807354] 'range keys from in-memory index tree'  (duration: 186.767015ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T19:06:16.607245Z","caller":"traceutil/trace.go:171","msg":"trace[354493794] transaction","detail":"{read_only:false; response_revision:1414; number_of_response:1; }","duration":"185.937952ms","start":"2024-12-05T19:06:16.421291Z","end":"2024-12-05T19:06:16.607229Z","steps":["trace[354493794] 'process raft request'  (duration: 185.822705ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T19:07:12.039299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.132289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T19:07:12.039462Z","caller":"traceutil/trace.go:171","msg":"trace[1689487960] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1788; }","duration":"208.356664ms","start":"2024-12-05T19:07:11.831085Z","end":"2024-12-05T19:07:12.039442Z","steps":["trace[1689487960] 'range keys from in-memory index tree'  (duration: 207.973824ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:11:56 up 9 min,  0 users,  load average: 0.23, 0.68, 0.49
	Linux addons-396564 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d573ee316398f9d872687b34b67057b642f060e9f6be1e2047272f413f522cc6] <==
	E1205 19:05:13.782449       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.188.101:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.188.101:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.188.101:443: connect: connection refused" logger="UnhandledError"
	E1205 19:05:13.788179       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.188.101:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.188.101:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.188.101:443: connect: connection refused" logger="UnhandledError"
	I1205 19:05:13.862182       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1205 19:06:01.500586       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37926: use of closed network connection
	I1205 19:06:10.947111       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.149.211"}
	I1205 19:06:35.718353       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1205 19:06:36.762903       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1205 19:06:41.250682       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 19:06:41.487034       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.87.93"}
	E1205 19:06:45.893129       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1205 19:06:59.827550       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1205 19:07:19.373128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:07:19.373193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:07:19.402014       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:07:19.402129       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:07:19.414914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:07:19.415022       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:07:19.422133       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:07:19.422193       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:07:19.448376       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:07:19.448423       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 19:07:20.415440       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 19:07:20.449207       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 19:07:20.592245       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 19:09:06.135256       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.144.164"}
	
	
	==> kube-controller-manager [555285d2c5baa83e8a141e31cb63b1ecf7f24747793d17bc184d077aa32d32ff] <==
	I1205 19:09:21.334879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-396564"
	W1205 19:09:38.714564       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:09:38.714654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:09:53.777444       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:09:53.777547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:09:54.708964       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:09:54.709115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:09:57.066494       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:09:57.066598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:10:23.059023       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:10:23.059141       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:10:40.022951       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:10:40.023245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:10:47.345596       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:10:47.345877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:10:49.706775       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:10:49.706916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:11:12.359466       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:11:12.359532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:11:22.327965       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:11:22.328085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:11:22.355251       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:11:22.355350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 19:11:25.689059       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:11:25.689203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [25818b4b391668a26fc48ea3644968bead720d71a2521a4cf15faef5dcf7db75] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 19:03:18.941178       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 19:03:18.961226       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.9"]
	E1205 19:03:18.961321       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 19:03:19.050961       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 19:03:19.051016       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 19:03:19.051049       1 server_linux.go:169] "Using iptables Proxier"
	I1205 19:03:19.053864       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 19:03:19.054077       1 server.go:483] "Version info" version="v1.31.2"
	I1205 19:03:19.054106       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:03:19.055402       1 config.go:199] "Starting service config controller"
	I1205 19:03:19.055445       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 19:03:19.055473       1 config.go:105] "Starting endpoint slice config controller"
	I1205 19:03:19.055495       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 19:03:19.056166       1 config.go:328] "Starting node config controller"
	I1205 19:03:19.056200       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 19:03:19.155774       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 19:03:19.155867       1 shared_informer.go:320] Caches are synced for service config
	I1205 19:03:19.156356       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [acbbe2cd3da913dd2fca7ca3cb2016f3e74b87dcffb1f4e7915d49104984e28e] <==
	W1205 19:03:10.457877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:03:10.458069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.561384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:03:10.561444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.625864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:03:10.625904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.628395       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:03:10.628596       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 19:03:10.631679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:10.631979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.631881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 19:03:10.632148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.703172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:10.703379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.744779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:10.744910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.756458       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:03:10.756581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.760280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 19:03:10.760469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.858309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:03:10.858452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 19:03:10.867401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:03:10.867542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1205 19:03:12.778371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 19:10:32 addons-396564 kubelet[1218]: E1205 19:10:32.355318    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425832354825045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:32 addons-396564 kubelet[1218]: E1205 19:10:32.355342    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425832354825045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:42 addons-396564 kubelet[1218]: E1205 19:10:42.363781    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425842358654202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:42 addons-396564 kubelet[1218]: E1205 19:10:42.364154    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425842358654202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:52 addons-396564 kubelet[1218]: E1205 19:10:52.366772    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425852366322485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:10:52 addons-396564 kubelet[1218]: E1205 19:10:52.367341    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425852366322485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:02 addons-396564 kubelet[1218]: E1205 19:11:02.369787    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425862369255203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:02 addons-396564 kubelet[1218]: E1205 19:11:02.370253    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425862369255203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:12 addons-396564 kubelet[1218]: E1205 19:11:12.196317    1218 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 19:11:12 addons-396564 kubelet[1218]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 19:11:12 addons-396564 kubelet[1218]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 19:11:12 addons-396564 kubelet[1218]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 19:11:12 addons-396564 kubelet[1218]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 19:11:12 addons-396564 kubelet[1218]: E1205 19:11:12.372852    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425872372504756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:12 addons-396564 kubelet[1218]: E1205 19:11:12.372899    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425872372504756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:22 addons-396564 kubelet[1218]: E1205 19:11:22.375892    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425882374821894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:22 addons-396564 kubelet[1218]: E1205 19:11:22.376814    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425882374821894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:27 addons-396564 kubelet[1218]: I1205 19:11:27.171271    1218 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 19:11:32 addons-396564 kubelet[1218]: E1205 19:11:32.383416    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425892379940792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:32 addons-396564 kubelet[1218]: E1205 19:11:32.383997    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425892379940792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:42 addons-396564 kubelet[1218]: E1205 19:11:42.387416    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425902387119260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:42 addons-396564 kubelet[1218]: E1205 19:11:42.387671    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425902387119260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:52 addons-396564 kubelet[1218]: I1205 19:11:52.172076    1218 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xcvzc" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 19:11:52 addons-396564 kubelet[1218]: E1205 19:11:52.391842    1218 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425912391446349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:11:52 addons-396564 kubelet[1218]: E1205 19:11:52.391909    1218 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733425912391446349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [dbae14404a1302fb8ac2eb0ca8137ed78f59059acd980e3980d42a29c87e08f9] <==
	I1205 19:03:24.754564       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:03:24.776398       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:03:24.776485       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:03:24.794036       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:03:24.794225       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-396564_929486ec-e5da-4ae8-9917-781360e96da1!
	I1205 19:03:24.795194       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"564c2f78-3944-4585-98ef-beb3cd4944d8", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-396564_929486ec-e5da-4ae8-9917-781360e96da1 became leader
	I1205 19:03:24.894417       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-396564_929486ec-e5da-4ae8-9917-781360e96da1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-396564 -n addons-396564
helpers_test.go:261: (dbg) Run:  kubectl --context addons-396564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (329.56s)
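The kube-apiserver log above repeatedly reports v1beta1.metrics.k8s.io failing with "connection refused" against the service ClusterIP 10.101.188.101, i.e. the metrics-server addon never became reachable through the aggregation layer. A minimal diagnostic sketch for checking this by hand, outside the test harness (the k8s-app=metrics-server label selector is an assumption about how the minikube addon labels its pod; the kubectl commands themselves are standard cluster inspection):

	# Confirm whether the aggregated metrics API ever registered as Available
	kubectl --context addons-396564 get apiservice v1beta1.metrics.k8s.io

	# Inspect the pod backing the 10.101.188.101 service
	# (label selector assumed for the minikube metrics-server addon)
	kubectl --context addons-396564 -n kube-system get pods -l k8s-app=metrics-server -o wide

	# Query the aggregated API directly; a "connection refused" here mirrors
	# the remote_available_controller errors in the apiserver log above
	kubectl --context addons-396564 get --raw /apis/metrics.k8s.io/v1beta1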

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.45s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-396564
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-396564: exit status 82 (2m0.486572387s)

                                                
                                                
-- stdout --
	* Stopping node "addons-396564"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-396564" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-396564
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-396564: exit status 11 (21.672422961s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.9:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-396564" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-396564
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-396564: exit status 11 (6.144100854s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.9:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-396564" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-396564
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-396564: exit status 11 (6.14377025s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.9:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-396564" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.45s)
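The GUEST_STOP_TIMEOUT above means the kvm2 driver polled the VM for the full stop window without it ever leaving the "Running" state, and the follow-up addon enable/disable commands then failed because SSH to 192.168.39.9 had no route. A hedged sketch of inspecting the underlying libvirt domain by hand (virsh is the standard libvirt CLI; treating the domain name as identical to the minikube profile name is an assumption, not something stated in the log):

	# List libvirt domains and check the state of the one backing the profile
	sudo virsh list --all
	sudo virsh domstate addons-396564   # domain name assumed to match the profile

	# If a graceful shutdown hangs, force the domain off and retry the stop
	sudo virsh destroy addons-396564
	out/minikube-linux-amd64 stop -p addons-396564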

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 node stop m02 -v=7 --alsologtostderr
E1205 19:23:55.988455  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:24:36.950531  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:25:51.381357  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-106302 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.498296403s)

                                                
                                                
-- stdout --
	* Stopping node "ha-106302-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:23:52.803286  553151 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:23:52.803455  553151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:23:52.803466  553151 out.go:358] Setting ErrFile to fd 2...
	I1205 19:23:52.803473  553151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:23:52.803745  553151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:23:52.804145  553151 mustload.go:65] Loading cluster: ha-106302
	I1205 19:23:52.804786  553151 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:23:52.804816  553151 stop.go:39] StopHost: ha-106302-m02
	I1205 19:23:52.805293  553151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:23:52.805353  553151 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:23:52.821909  553151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39927
	I1205 19:23:52.822512  553151 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:23:52.823207  553151 main.go:141] libmachine: Using API Version  1
	I1205 19:23:52.823232  553151 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:23:52.823679  553151 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:23:52.825986  553151 out.go:177] * Stopping node "ha-106302-m02"  ...
	I1205 19:23:52.827848  553151 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 19:23:52.827931  553151 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:23:52.828244  553151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 19:23:52.828356  553151 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:23:52.831853  553151 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:23:52.832323  553151 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:23:52.832344  553151 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:23:52.832530  553151 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:23:52.832719  553151 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:23:52.832861  553151 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:23:52.832998  553151 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:23:52.924745  553151 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 19:23:52.981445  553151 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 19:23:53.039207  553151 main.go:141] libmachine: Stopping "ha-106302-m02"...
	I1205 19:23:53.039244  553151 main.go:141] libmachine: (ha-106302-m02) Calling .GetState
	I1205 19:23:53.041097  553151 main.go:141] libmachine: (ha-106302-m02) Calling .Stop
	I1205 19:23:53.045085  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 0/120
	I1205 19:23:54.047006  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 1/120
	I1205 19:23:55.048324  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 2/120
	I1205 19:23:56.049618  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 3/120
	I1205 19:23:57.051063  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 4/120
	I1205 19:23:58.052935  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 5/120
	I1205 19:23:59.054476  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 6/120
	I1205 19:24:00.055957  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 7/120
	I1205 19:24:01.057466  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 8/120
	I1205 19:24:02.058814  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 9/120
	I1205 19:24:03.060820  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 10/120
	I1205 19:24:04.062924  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 11/120
	I1205 19:24:05.064338  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 12/120
	I1205 19:24:06.065794  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 13/120
	I1205 19:24:07.067967  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 14/120
	I1205 19:24:08.070153  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 15/120
	I1205 19:24:09.071381  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 16/120
	I1205 19:24:10.073001  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 17/120
	I1205 19:24:11.075204  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 18/120
	I1205 19:24:12.076758  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 19/120
	I1205 19:24:13.078353  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 20/120
	I1205 19:24:14.079887  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 21/120
	I1205 19:24:15.081221  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 22/120
	I1205 19:24:16.082680  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 23/120
	I1205 19:24:17.084394  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 24/120
	I1205 19:24:18.085742  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 25/120
	I1205 19:24:19.086952  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 26/120
	I1205 19:24:20.088404  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 27/120
	I1205 19:24:21.089562  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 28/120
	I1205 19:24:22.091014  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 29/120
	I1205 19:24:23.093241  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 30/120
	I1205 19:24:24.094853  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 31/120
	I1205 19:24:25.096188  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 32/120
	I1205 19:24:26.097516  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 33/120
	I1205 19:24:27.098869  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 34/120
	I1205 19:24:28.100302  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 35/120
	I1205 19:24:29.101928  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 36/120
	I1205 19:24:30.103637  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 37/120
	I1205 19:24:31.105909  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 38/120
	I1205 19:24:32.107303  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 39/120
	I1205 19:24:33.109851  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 40/120
	I1205 19:24:34.111696  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 41/120
	I1205 19:24:35.113335  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 42/120
	I1205 19:24:36.114982  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 43/120
	I1205 19:24:37.116880  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 44/120
	I1205 19:24:38.118820  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 45/120
	I1205 19:24:39.120435  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 46/120
	I1205 19:24:40.121812  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 47/120
	I1205 19:24:41.123370  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 48/120
	I1205 19:24:42.124810  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 49/120
	I1205 19:24:43.127203  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 50/120
	I1205 19:24:44.128520  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 51/120
	I1205 19:24:45.130985  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 52/120
	I1205 19:24:46.132604  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 53/120
	I1205 19:24:47.134097  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 54/120
	I1205 19:24:48.135377  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 55/120
	I1205 19:24:49.136817  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 56/120
	I1205 19:24:50.138338  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 57/120
	I1205 19:24:51.140082  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 58/120
	I1205 19:24:52.141810  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 59/120
	I1205 19:24:53.144018  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 60/120
	I1205 19:24:54.145249  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 61/120
	I1205 19:24:55.146606  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 62/120
	I1205 19:24:56.147970  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 63/120
	I1205 19:24:57.149456  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 64/120
	I1205 19:24:58.151600  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 65/120
	I1205 19:24:59.153073  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 66/120
	I1205 19:25:00.154770  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 67/120
	I1205 19:25:01.155995  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 68/120
	I1205 19:25:02.157440  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 69/120
	I1205 19:25:03.158876  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 70/120
	I1205 19:25:04.160353  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 71/120
	I1205 19:25:05.161822  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 72/120
	I1205 19:25:06.163288  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 73/120
	I1205 19:25:07.164611  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 74/120
	I1205 19:25:08.166638  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 75/120
	I1205 19:25:09.168112  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 76/120
	I1205 19:25:10.169474  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 77/120
	I1205 19:25:11.171234  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 78/120
	I1205 19:25:12.172702  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 79/120
	I1205 19:25:13.174638  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 80/120
	I1205 19:25:14.176021  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 81/120
	I1205 19:25:15.177275  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 82/120
	I1205 19:25:16.178738  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 83/120
	I1205 19:25:17.180010  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 84/120
	I1205 19:25:18.182124  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 85/120
	I1205 19:25:19.184082  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 86/120
	I1205 19:25:20.185763  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 87/120
	I1205 19:25:21.187026  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 88/120
	I1205 19:25:22.188773  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 89/120
	I1205 19:25:23.191138  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 90/120
	I1205 19:25:24.192791  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 91/120
	I1205 19:25:25.194788  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 92/120
	I1205 19:25:26.196299  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 93/120
	I1205 19:25:27.198471  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 94/120
	I1205 19:25:28.200584  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 95/120
	I1205 19:25:29.202920  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 96/120
	I1205 19:25:30.204572  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 97/120
	I1205 19:25:31.206933  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 98/120
	I1205 19:25:32.208638  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 99/120
	I1205 19:25:33.211008  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 100/120
	I1205 19:25:34.212810  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 101/120
	I1205 19:25:35.214751  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 102/120
	I1205 19:25:36.216063  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 103/120
	I1205 19:25:37.217814  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 104/120
	I1205 19:25:38.219780  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 105/120
	I1205 19:25:39.221674  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 106/120
	I1205 19:25:40.223178  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 107/120
	I1205 19:25:41.224621  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 108/120
	I1205 19:25:42.226139  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 109/120
	I1205 19:25:43.228200  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 110/120
	I1205 19:25:44.230220  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 111/120
	I1205 19:25:45.232534  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 112/120
	I1205 19:25:46.235111  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 113/120
	I1205 19:25:47.236717  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 114/120
	I1205 19:25:48.239019  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 115/120
	I1205 19:25:49.240684  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 116/120
	I1205 19:25:50.243230  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 117/120
	I1205 19:25:51.244765  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 118/120
	I1205 19:25:52.246387  553151 main.go:141] libmachine: (ha-106302-m02) Waiting for machine to stop 119/120
	I1205 19:25:53.247468  553151 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 19:25:53.247634  553151 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-106302 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr
E1205 19:25:58.872437  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr: (18.892867109s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr": 
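The assertions at ha_test.go:377-386 parse the human-readable output of minikube status for the number of Running hosts, kubelets and apiservers across the three control-plane nodes. A hedged way to re-run that check manually (the grep patterns assume the "host: Running" / "kubelet: Running" / "apiserver: Running" lines that minikube status prints; this is not the test's actual parser):

	# Re-run the status command the assertions parse
	out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr

	# Count per-component Running states across the nodes
	out/minikube-linux-amd64 -p ha-106302 status | grep -c "host: Running"
	out/minikube-linux-amd64 -p ha-106302 status | grep -c "kubelet: Running"
	out/minikube-linux-amd64 -p ha-106302 status | grep -c "apiserver: Running"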
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-106302 -n ha-106302
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-106302 logs -n 25: (1.493356038s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302:/home/docker/cp-test_ha-106302-m03_ha-106302.txt                     |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302 sudo cat                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302.txt                               |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m04 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp testdata/cp-test.txt                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302:/home/docker/cp-test_ha-106302-m04_ha-106302.txt                     |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302 sudo cat                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302.txt                               |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03:/home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m03 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-106302 node stop m02 -v=7                                                   | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
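	The Audit table above records the copy-and-verify round trips the test performs: minikube cp pushes a file to a node, then minikube ssh -n <node> sudo cat reads it back. A minimal sketch of that pattern, assuming only the minikube binary on PATH (the helper is hypothetical, not part of the suite):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// roundTrip copies a local file onto a node of the given profile and reads it
	// back over SSH, mirroring the cp/ssh pairs in the audit table above.
	func roundTrip(profile, node, local, remote string) (string, error) {
		if out, err := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
			return "", fmt.Errorf("cp failed: %v: %s", err, out)
		}
		out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo", "cat", remote).CombinedOutput()
		return string(out), err
	}

	func main() {
		got, err := roundTrip("ha-106302", "ha-106302-m04", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(got)
	}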
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:19:05
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:19:05.666020  549077 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:19:05.666172  549077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:19:05.666182  549077 out.go:358] Setting ErrFile to fd 2...
	I1205 19:19:05.666187  549077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:19:05.666372  549077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:19:05.666982  549077 out.go:352] Setting JSON to false
	I1205 19:19:05.667993  549077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7292,"bootTime":1733419054,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:19:05.668118  549077 start.go:139] virtualization: kvm guest
	I1205 19:19:05.670258  549077 out.go:177] * [ha-106302] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:19:05.672244  549077 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:19:05.672310  549077 notify.go:220] Checking for updates...
	I1205 19:19:05.674836  549077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:19:05.676311  549077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:05.677586  549077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:05.678906  549077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:19:05.680179  549077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:19:05.681501  549077 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:19:05.716520  549077 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 19:19:05.718361  549077 start.go:297] selected driver: kvm2
	I1205 19:19:05.718375  549077 start.go:901] validating driver "kvm2" against <nil>
	I1205 19:19:05.718387  549077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:19:05.719138  549077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:19:05.719217  549077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:19:05.734721  549077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:19:05.734777  549077 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:19:05.735145  549077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:19:05.735198  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:05.735258  549077 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 19:19:05.735271  549077 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:19:05.735352  549077 start.go:340] cluster config:
	{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:19:05.735498  549077 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:19:05.737389  549077 out.go:177] * Starting "ha-106302" primary control-plane node in "ha-106302" cluster
	I1205 19:19:05.738520  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:05.738565  549077 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:19:05.738579  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:19:05.738663  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:19:05.738678  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:19:05.739034  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:05.739058  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json: {Name:mk36f887968924e3b867abb3b152df7882583b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:05.739210  549077 start.go:360] acquireMachinesLock for ha-106302: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:19:05.739241  549077 start.go:364] duration metric: took 16.973µs to acquireMachinesLock for "ha-106302"
	I1205 19:19:05.739258  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:05.739311  549077 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 19:19:05.740876  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:19:05.741018  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:05.741056  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:05.755320  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I1205 19:19:05.755768  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:05.756364  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:05.756386  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:05.756720  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:05.756918  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:05.757058  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:05.757247  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:19:05.757287  549077 client.go:168] LocalClient.Create starting
	I1205 19:19:05.757338  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:19:05.757377  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:05.757396  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:05.757476  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:19:05.757503  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:05.757522  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:05.757549  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:19:05.757567  549077 main.go:141] libmachine: (ha-106302) Calling .PreCreateCheck
	I1205 19:19:05.757886  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:05.758310  549077 main.go:141] libmachine: Creating machine...
	I1205 19:19:05.758325  549077 main.go:141] libmachine: (ha-106302) Calling .Create
	I1205 19:19:05.758443  549077 main.go:141] libmachine: (ha-106302) Creating KVM machine...
	I1205 19:19:05.759563  549077 main.go:141] libmachine: (ha-106302) DBG | found existing default KVM network
	I1205 19:19:05.760292  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:05.760130  549100 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1205 19:19:05.760373  549077 main.go:141] libmachine: (ha-106302) DBG | created network xml: 
	I1205 19:19:05.760394  549077 main.go:141] libmachine: (ha-106302) DBG | <network>
	I1205 19:19:05.760405  549077 main.go:141] libmachine: (ha-106302) DBG |   <name>mk-ha-106302</name>
	I1205 19:19:05.760417  549077 main.go:141] libmachine: (ha-106302) DBG |   <dns enable='no'/>
	I1205 19:19:05.760428  549077 main.go:141] libmachine: (ha-106302) DBG |   
	I1205 19:19:05.760437  549077 main.go:141] libmachine: (ha-106302) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 19:19:05.760450  549077 main.go:141] libmachine: (ha-106302) DBG |     <dhcp>
	I1205 19:19:05.760460  549077 main.go:141] libmachine: (ha-106302) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 19:19:05.760472  549077 main.go:141] libmachine: (ha-106302) DBG |     </dhcp>
	I1205 19:19:05.760488  549077 main.go:141] libmachine: (ha-106302) DBG |   </ip>
	I1205 19:19:05.760499  549077 main.go:141] libmachine: (ha-106302) DBG |   
	I1205 19:19:05.760507  549077 main.go:141] libmachine: (ha-106302) DBG | </network>
	I1205 19:19:05.760517  549077 main.go:141] libmachine: (ha-106302) DBG | 
	I1205 19:19:05.765547  549077 main.go:141] libmachine: (ha-106302) DBG | trying to create private KVM network mk-ha-106302 192.168.39.0/24...
	I1205 19:19:05.832912  549077 main.go:141] libmachine: (ha-106302) DBG | private KVM network mk-ha-106302 192.168.39.0/24 created
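	The XML above is the private NAT network the kvm2 driver defines for the cluster. To set up an equivalent network by hand, the same definition can be fed to stock virsh commands; a sketch, assuming virsh is installed and the XML has been saved to a file named mk-ha-106302.xml (the file name is illustrative):

	package main

	import (
		"log"
		"os/exec"
	)

	// Define, start and autostart a libvirt network from an XML file shaped like
	// the one in the log above. Illustrative only; the kvm2 driver creates it
	// through the libvirt API rather than by shelling out to virsh.
	func main() {
		for _, args := range [][]string{
			{"net-define", "mk-ha-106302.xml"},
			{"net-start", "mk-ha-106302"},
			{"net-autostart", "mk-ha-106302"},
		} {
			if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
				log.Fatalf("virsh %v: %v: %s", args, err, out)
			}
		}
	}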
	I1205 19:19:05.832950  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:05.832854  549100 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:05.832976  549077 main.go:141] libmachine: (ha-106302) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 ...
	I1205 19:19:05.832995  549077 main.go:141] libmachine: (ha-106302) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:19:05.833015  549077 main.go:141] libmachine: (ha-106302) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:19:06.116114  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.115928  549100 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa...
	I1205 19:19:06.195132  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.194945  549100 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/ha-106302.rawdisk...
	I1205 19:19:06.195166  549077 main.go:141] libmachine: (ha-106302) DBG | Writing magic tar header
	I1205 19:19:06.195176  549077 main.go:141] libmachine: (ha-106302) DBG | Writing SSH key tar header
	I1205 19:19:06.195183  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.195098  549100 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 ...
	I1205 19:19:06.195194  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302
	I1205 19:19:06.195272  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 (perms=drwx------)
	I1205 19:19:06.195294  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:19:06.195305  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:19:06.195321  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:06.195332  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:19:06.195340  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:19:06.195349  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:19:06.195354  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:19:06.195360  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home
	I1205 19:19:06.195379  549077 main.go:141] libmachine: (ha-106302) DBG | Skipping /home - not owner
	I1205 19:19:06.195390  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:19:06.195397  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:19:06.195403  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:19:06.195409  549077 main.go:141] libmachine: (ha-106302) Creating domain...
	I1205 19:19:06.196529  549077 main.go:141] libmachine: (ha-106302) define libvirt domain using xml: 
	I1205 19:19:06.196544  549077 main.go:141] libmachine: (ha-106302) <domain type='kvm'>
	I1205 19:19:06.196550  549077 main.go:141] libmachine: (ha-106302)   <name>ha-106302</name>
	I1205 19:19:06.196561  549077 main.go:141] libmachine: (ha-106302)   <memory unit='MiB'>2200</memory>
	I1205 19:19:06.196569  549077 main.go:141] libmachine: (ha-106302)   <vcpu>2</vcpu>
	I1205 19:19:06.196578  549077 main.go:141] libmachine: (ha-106302)   <features>
	I1205 19:19:06.196586  549077 main.go:141] libmachine: (ha-106302)     <acpi/>
	I1205 19:19:06.196595  549077 main.go:141] libmachine: (ha-106302)     <apic/>
	I1205 19:19:06.196603  549077 main.go:141] libmachine: (ha-106302)     <pae/>
	I1205 19:19:06.196621  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.196632  549077 main.go:141] libmachine: (ha-106302)   </features>
	I1205 19:19:06.196643  549077 main.go:141] libmachine: (ha-106302)   <cpu mode='host-passthrough'>
	I1205 19:19:06.196652  549077 main.go:141] libmachine: (ha-106302)   
	I1205 19:19:06.196658  549077 main.go:141] libmachine: (ha-106302)   </cpu>
	I1205 19:19:06.196670  549077 main.go:141] libmachine: (ha-106302)   <os>
	I1205 19:19:06.196677  549077 main.go:141] libmachine: (ha-106302)     <type>hvm</type>
	I1205 19:19:06.196689  549077 main.go:141] libmachine: (ha-106302)     <boot dev='cdrom'/>
	I1205 19:19:06.196704  549077 main.go:141] libmachine: (ha-106302)     <boot dev='hd'/>
	I1205 19:19:06.196715  549077 main.go:141] libmachine: (ha-106302)     <bootmenu enable='no'/>
	I1205 19:19:06.196724  549077 main.go:141] libmachine: (ha-106302)   </os>
	I1205 19:19:06.196732  549077 main.go:141] libmachine: (ha-106302)   <devices>
	I1205 19:19:06.196743  549077 main.go:141] libmachine: (ha-106302)     <disk type='file' device='cdrom'>
	I1205 19:19:06.196758  549077 main.go:141] libmachine: (ha-106302)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/boot2docker.iso'/>
	I1205 19:19:06.196769  549077 main.go:141] libmachine: (ha-106302)       <target dev='hdc' bus='scsi'/>
	I1205 19:19:06.196777  549077 main.go:141] libmachine: (ha-106302)       <readonly/>
	I1205 19:19:06.196783  549077 main.go:141] libmachine: (ha-106302)     </disk>
	I1205 19:19:06.196795  549077 main.go:141] libmachine: (ha-106302)     <disk type='file' device='disk'>
	I1205 19:19:06.196806  549077 main.go:141] libmachine: (ha-106302)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:19:06.196821  549077 main.go:141] libmachine: (ha-106302)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/ha-106302.rawdisk'/>
	I1205 19:19:06.196833  549077 main.go:141] libmachine: (ha-106302)       <target dev='hda' bus='virtio'/>
	I1205 19:19:06.196842  549077 main.go:141] libmachine: (ha-106302)     </disk>
	I1205 19:19:06.196851  549077 main.go:141] libmachine: (ha-106302)     <interface type='network'>
	I1205 19:19:06.196861  549077 main.go:141] libmachine: (ha-106302)       <source network='mk-ha-106302'/>
	I1205 19:19:06.196873  549077 main.go:141] libmachine: (ha-106302)       <model type='virtio'/>
	I1205 19:19:06.196896  549077 main.go:141] libmachine: (ha-106302)     </interface>
	I1205 19:19:06.196909  549077 main.go:141] libmachine: (ha-106302)     <interface type='network'>
	I1205 19:19:06.196919  549077 main.go:141] libmachine: (ha-106302)       <source network='default'/>
	I1205 19:19:06.196927  549077 main.go:141] libmachine: (ha-106302)       <model type='virtio'/>
	I1205 19:19:06.196936  549077 main.go:141] libmachine: (ha-106302)     </interface>
	I1205 19:19:06.196944  549077 main.go:141] libmachine: (ha-106302)     <serial type='pty'>
	I1205 19:19:06.196953  549077 main.go:141] libmachine: (ha-106302)       <target port='0'/>
	I1205 19:19:06.196962  549077 main.go:141] libmachine: (ha-106302)     </serial>
	I1205 19:19:06.196975  549077 main.go:141] libmachine: (ha-106302)     <console type='pty'>
	I1205 19:19:06.196984  549077 main.go:141] libmachine: (ha-106302)       <target type='serial' port='0'/>
	I1205 19:19:06.196996  549077 main.go:141] libmachine: (ha-106302)     </console>
	I1205 19:19:06.197007  549077 main.go:141] libmachine: (ha-106302)     <rng model='virtio'>
	I1205 19:19:06.197017  549077 main.go:141] libmachine: (ha-106302)       <backend model='random'>/dev/random</backend>
	I1205 19:19:06.197028  549077 main.go:141] libmachine: (ha-106302)     </rng>
	I1205 19:19:06.197036  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.197055  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.197068  549077 main.go:141] libmachine: (ha-106302)   </devices>
	I1205 19:19:06.197073  549077 main.go:141] libmachine: (ha-106302) </domain>
	I1205 19:19:06.197078  549077 main.go:141] libmachine: (ha-106302) 
	I1205 19:19:06.202279  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:71:9c:4d in network default
	I1205 19:19:06.203034  549077 main.go:141] libmachine: (ha-106302) Ensuring networks are active...
	I1205 19:19:06.203055  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:06.203739  549077 main.go:141] libmachine: (ha-106302) Ensuring network default is active
	I1205 19:19:06.204123  549077 main.go:141] libmachine: (ha-106302) Ensuring network mk-ha-106302 is active
	I1205 19:19:06.204705  549077 main.go:141] libmachine: (ha-106302) Getting domain xml...
	I1205 19:19:06.205494  549077 main.go:141] libmachine: (ha-106302) Creating domain...
	I1205 19:19:07.414905  549077 main.go:141] libmachine: (ha-106302) Waiting to get IP...
	I1205 19:19:07.415701  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:07.416131  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:07.416172  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:07.416110  549100 retry.go:31] will retry after 254.984492ms: waiting for machine to come up
	I1205 19:19:07.672644  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:07.673096  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:07.673126  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:07.673025  549100 retry.go:31] will retry after 337.308268ms: waiting for machine to come up
	I1205 19:19:08.011677  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.012131  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.012153  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.012097  549100 retry.go:31] will retry after 331.381496ms: waiting for machine to come up
	I1205 19:19:08.344830  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.345286  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.345315  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.345230  549100 retry.go:31] will retry after 526.921251ms: waiting for machine to come up
	I1205 19:19:08.874020  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.874426  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.874457  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.874366  549100 retry.go:31] will retry after 677.76743ms: waiting for machine to come up
	I1205 19:19:09.554490  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:09.555045  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:09.555078  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:09.554953  549100 retry.go:31] will retry after 810.208397ms: waiting for machine to come up
	I1205 19:19:10.367000  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:10.367429  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:10.367463  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:10.367397  549100 retry.go:31] will retry after 1.115748222s: waiting for machine to come up
	I1205 19:19:11.484531  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:11.485067  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:11.485098  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:11.485008  549100 retry.go:31] will retry after 1.3235703s: waiting for machine to come up
	I1205 19:19:12.810602  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:12.810991  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:12.811014  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:12.810945  549100 retry.go:31] will retry after 1.831554324s: waiting for machine to come up
	I1205 19:19:14.645035  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:14.645488  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:14.645513  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:14.645439  549100 retry.go:31] will retry after 1.712987373s: waiting for machine to come up
	I1205 19:19:16.360441  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:16.361053  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:16.361095  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:16.360964  549100 retry.go:31] will retry after 1.757836043s: waiting for machine to come up
	I1205 19:19:18.120905  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:18.121462  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:18.121490  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:18.121398  549100 retry.go:31] will retry after 2.555295546s: waiting for machine to come up
	I1205 19:19:20.680255  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:20.680831  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:20.680857  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:20.680783  549100 retry.go:31] will retry after 3.433196303s: waiting for machine to come up
	I1205 19:19:24.117782  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:24.118200  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:24.118225  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:24.118165  549100 retry.go:31] will retry after 5.333530854s: waiting for machine to come up
	I1205 19:19:29.456371  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.456820  549077 main.go:141] libmachine: (ha-106302) Found IP for machine: 192.168.39.185
	I1205 19:19:29.456837  549077 main.go:141] libmachine: (ha-106302) Reserving static IP address...
	I1205 19:19:29.456845  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has current primary IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.457259  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find host DHCP lease matching {name: "ha-106302", mac: "52:54:00:3b:e4:76", ip: "192.168.39.185"} in network mk-ha-106302
	I1205 19:19:29.532847  549077 main.go:141] libmachine: (ha-106302) DBG | Getting to WaitForSSH function...
	I1205 19:19:29.532882  549077 main.go:141] libmachine: (ha-106302) Reserved static IP address: 192.168.39.185
	I1205 19:19:29.532895  549077 main.go:141] libmachine: (ha-106302) Waiting for SSH to be available...
	I1205 19:19:29.535405  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.536081  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.536388  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.536771  549077 main.go:141] libmachine: (ha-106302) DBG | Using SSH client type: external
	I1205 19:19:29.536915  549077 main.go:141] libmachine: (ha-106302) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa (-rw-------)
	I1205 19:19:29.536944  549077 main.go:141] libmachine: (ha-106302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:19:29.536962  549077 main.go:141] libmachine: (ha-106302) DBG | About to run SSH command:
	I1205 19:19:29.536972  549077 main.go:141] libmachine: (ha-106302) DBG | exit 0
	I1205 19:19:29.664869  549077 main.go:141] libmachine: (ha-106302) DBG | SSH cmd err, output: <nil>: 
	I1205 19:19:29.665141  549077 main.go:141] libmachine: (ha-106302) KVM machine creation complete!
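	The "will retry after ..." lines above come from polling with progressively longer delays while the new VM acquires a DHCP lease and brings up sshd. A generic version of that wait pattern, standard library only (the helper name and backoff factor are illustrative, not the libmachine implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds, sleeping an increasing
	// amount between attempts, and gives up once maxWait has elapsed.
	func retryWithBackoff(fn func() error, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		delay := 250 * time.Millisecond
		for {
			if err := fn(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("gave up waiting")
			}
			time.Sleep(delay)
			delay += delay / 2 // grow the wait roughly 1.5x per attempt
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 5 {
				return errors.New("no IP yet") // stand-in for "unable to find current IP address"
			}
			return nil
		}, time.Minute)
		fmt.Println(attempts, err)
	}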
	I1205 19:19:29.665477  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:29.666068  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:29.666255  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:29.666420  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:19:29.666438  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:29.667703  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:19:29.667716  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:19:29.667721  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:19:29.667726  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.669895  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.670221  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.670248  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.670353  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.670530  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.670706  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.670840  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.671003  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.671220  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.671232  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:19:29.779777  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:19:29.779805  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:19:29.779833  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.782799  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.783132  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.783166  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.783331  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.783547  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.783683  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.783825  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.783999  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.784181  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.784191  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:19:29.893268  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:19:29.893371  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:19:29.893381  549077 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:19:29.893390  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:29.893630  549077 buildroot.go:166] provisioning hostname "ha-106302"
	I1205 19:19:29.893659  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:29.893862  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.896175  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.896531  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.896559  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.896683  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.896874  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.897035  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.897188  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.897357  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.897522  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.897537  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302 && echo "ha-106302" | sudo tee /etc/hostname
	I1205 19:19:30.019869  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:19:30.019903  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.022773  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.023137  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.023166  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.023330  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.023501  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.023684  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.023794  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.023973  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.024192  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.024213  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:19:30.142377  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
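	Provisioning above runs small shell snippets on the VM over SSH with the freshly generated key: setting the hostname and patching /etc/hosts. A minimal way to run one such command from Go, assuming the golang.org/x/crypto/ssh package and the key path and address shown in the log; this is a sketch, not the libmachine code path:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.185:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput(`sudo hostname ha-106302 && echo "ha-106302" | sudo tee /etc/hostname`)
		fmt.Printf("%s err=%v\n", out, err)
	}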
	I1205 19:19:30.142414  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:19:30.142464  549077 buildroot.go:174] setting up certificates
	I1205 19:19:30.142480  549077 provision.go:84] configureAuth start
	I1205 19:19:30.142498  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:30.142814  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.145608  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.145944  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.145976  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.146132  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.148289  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.148544  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.148570  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.148679  549077 provision.go:143] copyHostCerts
	I1205 19:19:30.148727  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:19:30.148761  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:19:30.148778  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:19:30.148862  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:19:30.148936  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:19:30.148954  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:19:30.148960  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:19:30.148984  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:19:30.149037  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:19:30.149054  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:19:30.149058  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:19:30.149079  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:19:30.149123  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302 san=[127.0.0.1 192.168.39.185 ha-106302 localhost minikube]
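
The provision step above generates a server certificate whose SANs cover the VM IP, the machine name, localhost and minikube. As a rough illustration of how such a SAN certificate can be produced with Go's crypto/x509 (a minimal self-signed sketch; minikube actually signs with ca.pem/ca-key.pem, and the helper name newServerCert is hypothetical):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// newServerCert (illustrative helper) issues a self-signed certificate whose
// SANs mirror the ones logged above: 127.0.0.1, the VM IP, the machine name,
// localhost and minikube.
func newServerCert(org string, ips []net.IP, dnsNames []string) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dnsNames,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	cert, key, err := newServerCert("jenkins.ha-106302",
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.185")},
		[]string{"ha-106302", "localhost", "minikube"})
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", cert, 0644)
	_ = os.WriteFile("server-key.pem", key, 0600)
}
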
	I1205 19:19:30.203242  549077 provision.go:177] copyRemoteCerts
	I1205 19:19:30.203307  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:19:30.203333  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.206290  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.206588  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.206621  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.206770  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.206956  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.207107  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.207262  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.291637  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:19:30.291726  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:19:30.316534  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:19:30.316648  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 19:19:30.340941  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:19:30.341027  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:19:30.365151  549077 provision.go:87] duration metric: took 222.64958ms to configureAuth
	I1205 19:19:30.365205  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:19:30.365380  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:30.365454  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.367820  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.368297  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.368331  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.368517  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.368750  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.368925  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.369063  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.369263  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.369448  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.369470  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:19:30.602742  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:19:30.602781  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:19:30.602812  549077 main.go:141] libmachine: (ha-106302) Calling .GetURL
	I1205 19:19:30.604203  549077 main.go:141] libmachine: (ha-106302) DBG | Using libvirt version 6000000
	I1205 19:19:30.606408  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.606761  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.606783  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.606936  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:19:30.606953  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:19:30.606980  549077 client.go:171] duration metric: took 24.849681626s to LocalClient.Create
	I1205 19:19:30.607004  549077 start.go:167] duration metric: took 24.849757772s to libmachine.API.Create "ha-106302"
	I1205 19:19:30.607018  549077 start.go:293] postStartSetup for "ha-106302" (driver="kvm2")
	I1205 19:19:30.607027  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:19:30.607063  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.607325  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:19:30.607353  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.609392  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.609687  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.609717  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.609857  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.610024  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.610186  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.610314  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.696960  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:19:30.708057  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:19:30.708089  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:19:30.708159  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:19:30.708255  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:19:30.708293  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:19:30.708421  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:19:30.723671  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:19:30.750926  549077 start.go:296] duration metric: took 143.887881ms for postStartSetup
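
The filesync step above scans .minikube/files and mirrors whatever it finds (here etc/ssl/certs/5381862.pem) onto the guest under the same relative path. A minimal local sketch of that mirroring, assuming plain filesystem copies rather than the SSH/scp transfer minikube actually performs; syncFiles is a hypothetical helper:

package main

import (
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

// syncFiles copies every regular file under srcRoot to the same relative
// path under dstRoot, creating directories as needed.
func syncFiles(srcRoot, dstRoot string) error {
	return filepath.WalkDir(srcRoot, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(srcRoot, path)
		if err != nil {
			return err
		}
		dst := filepath.Join(dstRoot, rel)
		if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
			return err
		}
		in, err := os.Open(path)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	})
}

func main() {
	// Destination path is illustrative; the real target is the guest filesystem.
	if err := syncFiles(os.ExpandEnv("$HOME/.minikube/files"), "/tmp/guest-root"); err != nil {
		panic(err)
	}
}
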
	I1205 19:19:30.750995  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:30.751793  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.754292  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.754719  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.754767  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.755073  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:30.755274  549077 start.go:128] duration metric: took 25.015949989s to createHost
	I1205 19:19:30.755307  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.757830  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.758211  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.758247  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.758373  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.758576  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.758728  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.758849  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.759003  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.759199  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.759225  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:19:30.869236  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426370.835143064
	
	I1205 19:19:30.869266  549077 fix.go:216] guest clock: 1733426370.835143064
	I1205 19:19:30.869276  549077 fix.go:229] Guest: 2024-12-05 19:19:30.835143064 +0000 UTC Remote: 2024-12-05 19:19:30.755292155 +0000 UTC m=+25.129028552 (delta=79.850909ms)
	I1205 19:19:30.869342  549077 fix.go:200] guest clock delta is within tolerance: 79.850909ms
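
The fix.go lines above capture `date +%s.%N` on the guest and compare it against the host clock, accepting the result only when the delta stays inside a tolerance. A rough sketch of that comparison, assuming the remote output has already been captured as a string; the 2-second tolerance is an assumed value, not necessarily minikube's:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the "seconds.nanoseconds" output of `date +%s.%N`
// and returns how far the guest clock is from the local clock.
func clockDelta(remoteOut string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(remoteOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(int64(secs), int64((secs-math.Trunc(secs))*1e9))
	return local.Sub(guest), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold
	delta, err := clockDelta("1733426370.835143064", time.Now())
	if err != nil {
		panic(err)
	}
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, adjusting\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
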
	I1205 19:19:30.869354  549077 start.go:83] releasing machines lock for "ha-106302", held for 25.130102669s
	I1205 19:19:30.869396  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.869701  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.872169  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.872505  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.872550  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.872651  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873195  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873371  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873461  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:19:30.873500  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.873622  549077 ssh_runner.go:195] Run: cat /version.json
	I1205 19:19:30.873648  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.876112  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876348  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876515  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.876544  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876694  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.876787  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.876829  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876854  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.876974  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.877063  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.877155  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.877225  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.877286  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.877416  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.978260  549077 ssh_runner.go:195] Run: systemctl --version
	I1205 19:19:30.984523  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:19:31.144577  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:19:31.150862  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:19:31.150921  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:19:31.168518  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
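
Before picking a CNI, the step above looks for bridge/podman configs under /etc/cni/net.d and renames them with a .mk_disabled suffix so the runtime stops loading them. A sketch of that rename pass, assuming direct filesystem access instead of the `find ... -exec mv` run over SSH; disableBridgeConfs is an illustrative name:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfs renames every *bridge* or *podman* config in dir by
// appending ".mk_disabled", mirroring the find/mv command in the log.
func disableBridgeConfs(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeConfs("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}
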
	I1205 19:19:31.168546  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:19:31.168607  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:19:31.184398  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:19:31.198391  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:19:31.198459  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:19:31.212374  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:19:31.227092  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:19:31.345190  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:19:31.498651  549077 docker.go:233] disabling docker service ...
	I1205 19:19:31.498756  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:19:31.514013  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:19:31.527698  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:19:31.668291  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:19:31.787293  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:19:31.802121  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:19:31.821416  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:19:31.821488  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.831922  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:19:31.832002  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.842263  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.852580  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.863167  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:19:31.873525  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.883966  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.901444  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.913185  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:19:31.922739  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:19:31.922847  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:19:31.935394  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
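
The three commands above first probe net.bridge.bridge-nf-call-iptables, fall back to `modprobe br_netfilter` when the sysctl path is missing (the "might be okay" case), and then enable IPv4 forwarding. A sketch of that check-and-fallback sequence with os/exec, assuming it runs directly on the node as root rather than through ssh_runner:

package main

import (
	"log"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: if the
// bridge-nf-call-iptables sysctl is not present, load br_netfilter,
// then turn on ip_forward either way.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		log.Printf("bridge netfilter sysctl missing (%v), loading br_netfilter", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Printf("modprobe failed: %v: %s", err, out)
			return err
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		log.Fatal(err)
	}
}
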
	I1205 19:19:31.944801  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:19:32.062619  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:19:32.155496  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:19:32.155575  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:19:32.161325  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:19:32.161401  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:19:32.165363  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:19:32.206408  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:19:32.206526  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:19:32.236278  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:19:32.267603  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:19:32.269318  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:32.272307  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:32.272654  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:32.272680  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:32.272875  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:19:32.277254  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:19:32.290866  549077 kubeadm.go:883] updating cluster {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1205 19:19:32.290982  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:32.291025  549077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:19:32.327363  549077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 19:19:32.327433  549077 ssh_runner.go:195] Run: which lz4
	I1205 19:19:32.331533  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 19:19:32.331639  549077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 19:19:32.335872  549077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 19:19:32.335904  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 19:19:33.796243  549077 crio.go:462] duration metric: took 1.464622041s to copy over tarball
	I1205 19:19:33.796360  549077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 19:19:35.904137  549077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.107740538s)
	I1205 19:19:35.904177  549077 crio.go:469] duration metric: took 2.107873128s to extract the tarball
	I1205 19:19:35.904188  549077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 19:19:35.941468  549077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:19:35.985079  549077 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:19:35.985107  549077 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:19:35.985116  549077 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.2 crio true true} ...
	I1205 19:19:35.985222  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:19:35.985289  549077 ssh_runner.go:195] Run: crio config
	I1205 19:19:36.034780  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:36.034806  549077 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 19:19:36.034818  549077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:19:36.034841  549077 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-106302 NodeName:ha-106302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:19:36.035004  549077 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-106302"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 19:19:36.035032  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:19:36.035097  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:19:36.051693  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:19:36.051834  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
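
The kube-vip static pod manifest above is rendered from a template with the VIP (192.168.39.254), API port and image filled in. A stripped-down sketch of that kind of templating with text/template; the struct fields and the template body here are illustrative, not the ones minikube's kube-vip.go actually uses:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the handful of values substituted into the static pod
// manifest: the shared virtual IP, the API server port and the image tag.
type vipParams struct {
	VIP   string
	Port  string
	Image string
}

const vipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTmpl))
	if err := t.Execute(os.Stdout, vipParams{
		VIP:   "192.168.39.254",
		Port:  "8443",
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.6",
	}); err != nil {
		panic(err)
	}
}
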
	I1205 19:19:36.051903  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:19:36.062174  549077 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:19:36.062270  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 19:19:36.072102  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 19:19:36.089037  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:19:36.105710  549077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 19:19:36.122352  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1205 19:19:36.139382  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:19:36.143400  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:19:36.156091  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:19:36.264660  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:19:36.281414  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.185
	I1205 19:19:36.281442  549077 certs.go:194] generating shared ca certs ...
	I1205 19:19:36.281458  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.281638  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:19:36.281689  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:19:36.281704  549077 certs.go:256] generating profile certs ...
	I1205 19:19:36.281767  549077 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:19:36.281786  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt with IP's: []
	I1205 19:19:36.500418  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt ...
	I1205 19:19:36.500457  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt: {Name:mkb14e7bfcf7e74b43ed78fd0539344fe783f416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.500681  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key ...
	I1205 19:19:36.500700  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key: {Name:mk7e0330a0f2228d88e0f9d58264fe1f08349563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.500831  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da
	I1205 19:19:36.500858  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.254]
	I1205 19:19:36.595145  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da ...
	I1205 19:19:36.595178  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da: {Name:mk6fe31beb668f4be09d7ef716f12b627681f889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.595356  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da ...
	I1205 19:19:36.595368  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da: {Name:mkb2102bd03507fee93efd6f4ad4d01650f6960d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.595451  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:19:36.595530  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:19:36.595588  549077 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:19:36.595600  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt with IP's: []
	I1205 19:19:36.750498  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt ...
	I1205 19:19:36.750528  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt: {Name:mk310719ddd3b7c13526e0d5963ab5146ba62c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.750689  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key ...
	I1205 19:19:36.750700  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key: {Name:mka21d6cd95f23029a85e314b05925420c5b8d35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.750768  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:19:36.750785  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:19:36.750796  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:19:36.750809  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:19:36.750819  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:19:36.750831  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:19:36.750841  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:19:36.750856  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:19:36.750907  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:19:36.750946  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:19:36.750968  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:19:36.750995  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:19:36.751018  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:19:36.751046  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:19:36.751085  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:19:36.751157  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:19:36.751182  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:19:36.751197  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:36.751757  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:19:36.777283  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:19:36.800796  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:19:36.824188  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:19:36.847922  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 19:19:36.871853  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:19:36.897433  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:19:36.923449  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:19:36.949838  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:19:36.975187  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:19:36.999764  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:19:37.024507  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:19:37.044052  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:19:37.052297  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:19:37.068345  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.073536  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.073603  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.080035  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:19:37.091136  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:19:37.115623  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.120621  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.120687  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.126618  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:19:37.138669  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:19:37.150853  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.155803  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.155881  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.162049  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
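
The openssl/ln commands above install each CA bundle under /usr/share/ca-certificates and then create the <subject-hash>.0 symlink in /etc/ssl/certs that OpenSSL uses for lookups. A sketch of that symlinking step which shells out to openssl for the hash, assuming openssl is on PATH and the PEM has already been copied into place:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash asks openssl for the certificate's subject hash and creates
// <certsDir>/<hash>.0 pointing at the installed PEM, matching the
// "ln -fs ... /etc/ssl/certs/b5213941.0" style commands in the log.
func linkCertByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// -fs semantics: replace any existing link before recreating it.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		panic(err)
	}
	fmt.Println("created", link)
}
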
	I1205 19:19:37.174819  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:19:37.179494  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:19:37.179570  549077 kubeadm.go:392] StartCluster: {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:19:37.179688  549077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:19:37.179745  549077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:19:37.223116  549077 cri.go:89] found id: ""
	I1205 19:19:37.223191  549077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:19:37.234706  549077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:19:37.247347  549077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:19:37.259258  549077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:19:37.259287  549077 kubeadm.go:157] found existing configuration files:
	
	I1205 19:19:37.259336  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 19:19:37.269699  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 19:19:37.269766  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 19:19:37.280566  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 19:19:37.290999  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 19:19:37.291070  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 19:19:37.302967  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 19:19:37.313065  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 19:19:37.313160  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 19:19:37.323523  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 19:19:37.333224  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 19:19:37.333286  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 19:19:37.343725  549077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 19:19:37.465425  549077 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 19:19:37.465503  549077 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 19:19:37.563680  549077 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:19:37.563837  549077 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:19:37.563944  549077 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 19:19:37.577125  549077 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:19:37.767794  549077 out.go:235]   - Generating certificates and keys ...
	I1205 19:19:37.767998  549077 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 19:19:37.768133  549077 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 19:19:37.768233  549077 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:19:37.823275  549077 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:19:38.256538  549077 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:19:38.418481  549077 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 19:19:38.506453  549077 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 19:19:38.506612  549077 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-106302 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I1205 19:19:38.599268  549077 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 19:19:38.599504  549077 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-106302 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I1205 19:19:38.721006  549077 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:19:38.801347  549077 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:19:39.020781  549077 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 19:19:39.020849  549077 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:19:39.351214  549077 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:19:39.652426  549077 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 19:19:39.852747  549077 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:19:39.949305  549077 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:19:40.093193  549077 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:19:40.093754  549077 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:19:40.099424  549077 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:19:40.101578  549077 out.go:235]   - Booting up control plane ...
	I1205 19:19:40.101681  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:19:40.101747  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:19:40.101808  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:19:40.118245  549077 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:19:40.124419  549077 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:19:40.124472  549077 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 19:19:40.264350  549077 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 19:19:40.264527  549077 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 19:19:40.767072  549077 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.104658ms
	I1205 19:19:40.767195  549077 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 19:19:46.889839  549077 kubeadm.go:310] [api-check] The API server is healthy after 6.126522028s
	I1205 19:19:46.903949  549077 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:19:46.920566  549077 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:19:46.959559  549077 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:19:46.959762  549077 kubeadm.go:310] [mark-control-plane] Marking the node ha-106302 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:19:46.972882  549077 kubeadm.go:310] [bootstrap-token] Using token: hftusq.bke4u9rqswjxk9ui
	I1205 19:19:46.974672  549077 out.go:235]   - Configuring RBAC rules ...
	I1205 19:19:46.974836  549077 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:19:46.983462  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:19:46.993184  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:19:47.001254  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:19:47.006556  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:19:47.012815  549077 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:19:47.297618  549077 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:19:47.737983  549077 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 19:19:48.297207  549077 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 19:19:48.298256  549077 kubeadm.go:310] 
	I1205 19:19:48.298332  549077 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 19:19:48.298344  549077 kubeadm.go:310] 
	I1205 19:19:48.298499  549077 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 19:19:48.298523  549077 kubeadm.go:310] 
	I1205 19:19:48.298551  549077 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 19:19:48.298654  549077 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:19:48.298730  549077 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:19:48.298740  549077 kubeadm.go:310] 
	I1205 19:19:48.298818  549077 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 19:19:48.298835  549077 kubeadm.go:310] 
	I1205 19:19:48.298894  549077 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:19:48.298903  549077 kubeadm.go:310] 
	I1205 19:19:48.298967  549077 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 19:19:48.299056  549077 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:19:48.299139  549077 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:19:48.299148  549077 kubeadm.go:310] 
	I1205 19:19:48.299267  549077 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:19:48.299368  549077 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 19:19:48.299380  549077 kubeadm.go:310] 
	I1205 19:19:48.299496  549077 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hftusq.bke4u9rqswjxk9ui \
	I1205 19:19:48.299623  549077 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 19:19:48.299658  549077 kubeadm.go:310] 	--control-plane 
	I1205 19:19:48.299667  549077 kubeadm.go:310] 
	I1205 19:19:48.299787  549077 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:19:48.299797  549077 kubeadm.go:310] 
	I1205 19:19:48.299896  549077 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hftusq.bke4u9rqswjxk9ui \
	I1205 19:19:48.300017  549077 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 19:19:48.300978  549077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:19:48.301019  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:48.301039  549077 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 19:19:48.302992  549077 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:19:48.304422  549077 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:19:48.310158  549077 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 19:19:48.310179  549077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 19:19:48.330305  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 19:19:48.708578  549077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:19:48.708692  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:48.708697  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302 minikube.k8s.io/updated_at=2024_12_05T19_19_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=true
	I1205 19:19:48.766673  549077 ops.go:34] apiserver oom_adj: -16
	I1205 19:19:48.946725  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:49.447511  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:49.947827  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:50.447219  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:50.947321  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:51.447070  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:51.946846  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:52.030950  549077 kubeadm.go:1113] duration metric: took 3.322332375s to wait for elevateKubeSystemPrivileges
	I1205 19:19:52.030984  549077 kubeadm.go:394] duration metric: took 14.851420641s to StartCluster
	I1205 19:19:52.031005  549077 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:52.031096  549077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:52.032088  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:52.032382  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:19:52.032390  549077 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:52.032418  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:19:52.032436  549077 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 19:19:52.032529  549077 addons.go:69] Setting storage-provisioner=true in profile "ha-106302"
	I1205 19:19:52.032562  549077 addons.go:234] Setting addon storage-provisioner=true in "ha-106302"
	I1205 19:19:52.032575  549077 addons.go:69] Setting default-storageclass=true in profile "ha-106302"
	I1205 19:19:52.032596  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:19:52.032603  549077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-106302"
	I1205 19:19:52.032616  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:52.032974  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.033012  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.033080  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.033128  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.048867  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I1205 19:19:52.048932  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
	I1205 19:19:52.049474  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.049598  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.050083  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.050108  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.050196  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.050217  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.050494  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.050547  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.050740  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.051108  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.051156  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.053000  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:52.053380  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:19:52.053986  549077 cert_rotation.go:140] Starting client certificate rotation controller
	I1205 19:19:52.054434  549077 addons.go:234] Setting addon default-storageclass=true in "ha-106302"
	I1205 19:19:52.054485  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:19:52.054871  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.054924  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.068403  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
	I1205 19:19:52.069056  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.069816  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.069851  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.070279  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.070500  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.071258  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I1205 19:19:52.071775  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.072386  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.072414  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.072576  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:52.072784  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.073435  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.073491  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.074239  549077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:19:52.075532  549077 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:19:52.075550  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:19:52.075581  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:52.079231  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.079693  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:52.079729  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.080048  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:52.080297  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:52.080464  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:52.080625  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:52.090582  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1205 19:19:52.091077  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.091649  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.091690  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.092023  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.092235  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.093928  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:52.094164  549077 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:19:52.094184  549077 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:19:52.094204  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:52.097425  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.097952  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:52.097988  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.098172  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:52.098357  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:52.098547  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:52.098690  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:52.240649  549077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:19:52.260476  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:19:52.326335  549077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:19:53.107266  549077 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1205 19:19:53.107380  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107404  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107428  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107411  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107855  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.107863  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.107872  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.107875  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.107881  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107889  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107898  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107909  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.108388  549077 main.go:141] libmachine: (ha-106302) DBG | Closing plugin on server side
	I1205 19:19:53.108430  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.108447  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.108523  549077 main.go:141] libmachine: (ha-106302) DBG | Closing plugin on server side
	I1205 19:19:53.108536  549077 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 19:19:53.108552  549077 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 19:19:53.108666  549077 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1205 19:19:53.108672  549077 round_trippers.go:469] Request Headers:
	I1205 19:19:53.108683  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:19:53.108690  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:19:53.108977  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.109004  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.122784  549077 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1205 19:19:53.123463  549077 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1205 19:19:53.123481  549077 round_trippers.go:469] Request Headers:
	I1205 19:19:53.123489  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:19:53.123494  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:19:53.123497  549077 round_trippers.go:473]     Content-Type: application/json
	I1205 19:19:53.127870  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:19:53.128387  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.128421  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.128753  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.128782  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.130618  549077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1205 19:19:53.131922  549077 addons.go:510] duration metric: took 1.09949066s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 19:19:53.131966  549077 start.go:246] waiting for cluster config update ...
	I1205 19:19:53.131976  549077 start.go:255] writing updated cluster config ...
	I1205 19:19:53.133784  549077 out.go:201] 
	I1205 19:19:53.135291  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:53.135384  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:53.137100  549077 out.go:177] * Starting "ha-106302-m02" control-plane node in "ha-106302" cluster
	I1205 19:19:53.138489  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:53.138517  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:19:53.138635  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:19:53.138649  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:19:53.138720  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:53.138982  549077 start.go:360] acquireMachinesLock for ha-106302-m02: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:19:53.139025  549077 start.go:364] duration metric: took 23.765µs to acquireMachinesLock for "ha-106302-m02"
	I1205 19:19:53.139048  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:53.139118  549077 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1205 19:19:53.140509  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:19:53.140599  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:53.140636  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:53.156622  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I1205 19:19:53.157158  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:53.157623  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:53.157649  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:53.157947  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:53.158168  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:19:53.158323  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:19:53.158520  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:19:53.158562  549077 client.go:168] LocalClient.Create starting
	I1205 19:19:53.158607  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:19:53.158656  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:53.158704  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:53.158778  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:19:53.158809  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:53.158825  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:53.158852  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:19:53.158863  549077 main.go:141] libmachine: (ha-106302-m02) Calling .PreCreateCheck
	I1205 19:19:53.159044  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:19:53.159562  549077 main.go:141] libmachine: Creating machine...
	I1205 19:19:53.159580  549077 main.go:141] libmachine: (ha-106302-m02) Calling .Create
	I1205 19:19:53.159720  549077 main.go:141] libmachine: (ha-106302-m02) Creating KVM machine...
	I1205 19:19:53.161306  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found existing default KVM network
	I1205 19:19:53.161451  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found existing private KVM network mk-ha-106302
	I1205 19:19:53.161677  549077 main.go:141] libmachine: (ha-106302-m02) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 ...
	I1205 19:19:53.161706  549077 main.go:141] libmachine: (ha-106302-m02) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:19:53.161792  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.161686  549462 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:53.161946  549077 main.go:141] libmachine: (ha-106302-m02) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:19:53.454907  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.454778  549462 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa...
	I1205 19:19:53.629727  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.629571  549462 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/ha-106302-m02.rawdisk...
	I1205 19:19:53.629774  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Writing magic tar header
	I1205 19:19:53.629794  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Writing SSH key tar header
	I1205 19:19:53.629802  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.629693  549462 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 ...
	I1205 19:19:53.629813  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02
	I1205 19:19:53.629877  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 (perms=drwx------)
	I1205 19:19:53.629901  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:19:53.629937  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:19:53.629971  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:19:53.629982  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:53.629997  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:19:53.630005  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:19:53.630016  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:19:53.630032  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:19:53.630058  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:19:53.630069  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:19:53.630084  549077 main.go:141] libmachine: (ha-106302-m02) Creating domain...
	I1205 19:19:53.630098  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home
	I1205 19:19:53.630111  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Skipping /home - not owner
	I1205 19:19:53.630931  549077 main.go:141] libmachine: (ha-106302-m02) define libvirt domain using xml: 
	I1205 19:19:53.630951  549077 main.go:141] libmachine: (ha-106302-m02) <domain type='kvm'>
	I1205 19:19:53.630961  549077 main.go:141] libmachine: (ha-106302-m02)   <name>ha-106302-m02</name>
	I1205 19:19:53.630968  549077 main.go:141] libmachine: (ha-106302-m02)   <memory unit='MiB'>2200</memory>
	I1205 19:19:53.630977  549077 main.go:141] libmachine: (ha-106302-m02)   <vcpu>2</vcpu>
	I1205 19:19:53.630984  549077 main.go:141] libmachine: (ha-106302-m02)   <features>
	I1205 19:19:53.630994  549077 main.go:141] libmachine: (ha-106302-m02)     <acpi/>
	I1205 19:19:53.630998  549077 main.go:141] libmachine: (ha-106302-m02)     <apic/>
	I1205 19:19:53.631006  549077 main.go:141] libmachine: (ha-106302-m02)     <pae/>
	I1205 19:19:53.631010  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631018  549077 main.go:141] libmachine: (ha-106302-m02)   </features>
	I1205 19:19:53.631023  549077 main.go:141] libmachine: (ha-106302-m02)   <cpu mode='host-passthrough'>
	I1205 19:19:53.631031  549077 main.go:141] libmachine: (ha-106302-m02)   
	I1205 19:19:53.631048  549077 main.go:141] libmachine: (ha-106302-m02)   </cpu>
	I1205 19:19:53.631078  549077 main.go:141] libmachine: (ha-106302-m02)   <os>
	I1205 19:19:53.631098  549077 main.go:141] libmachine: (ha-106302-m02)     <type>hvm</type>
	I1205 19:19:53.631107  549077 main.go:141] libmachine: (ha-106302-m02)     <boot dev='cdrom'/>
	I1205 19:19:53.631116  549077 main.go:141] libmachine: (ha-106302-m02)     <boot dev='hd'/>
	I1205 19:19:53.631124  549077 main.go:141] libmachine: (ha-106302-m02)     <bootmenu enable='no'/>
	I1205 19:19:53.631134  549077 main.go:141] libmachine: (ha-106302-m02)   </os>
	I1205 19:19:53.631143  549077 main.go:141] libmachine: (ha-106302-m02)   <devices>
	I1205 19:19:53.631154  549077 main.go:141] libmachine: (ha-106302-m02)     <disk type='file' device='cdrom'>
	I1205 19:19:53.631183  549077 main.go:141] libmachine: (ha-106302-m02)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/boot2docker.iso'/>
	I1205 19:19:53.631194  549077 main.go:141] libmachine: (ha-106302-m02)       <target dev='hdc' bus='scsi'/>
	I1205 19:19:53.631203  549077 main.go:141] libmachine: (ha-106302-m02)       <readonly/>
	I1205 19:19:53.631212  549077 main.go:141] libmachine: (ha-106302-m02)     </disk>
	I1205 19:19:53.631221  549077 main.go:141] libmachine: (ha-106302-m02)     <disk type='file' device='disk'>
	I1205 19:19:53.631237  549077 main.go:141] libmachine: (ha-106302-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:19:53.631252  549077 main.go:141] libmachine: (ha-106302-m02)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/ha-106302-m02.rawdisk'/>
	I1205 19:19:53.631263  549077 main.go:141] libmachine: (ha-106302-m02)       <target dev='hda' bus='virtio'/>
	I1205 19:19:53.631274  549077 main.go:141] libmachine: (ha-106302-m02)     </disk>
	I1205 19:19:53.631284  549077 main.go:141] libmachine: (ha-106302-m02)     <interface type='network'>
	I1205 19:19:53.631293  549077 main.go:141] libmachine: (ha-106302-m02)       <source network='mk-ha-106302'/>
	I1205 19:19:53.631316  549077 main.go:141] libmachine: (ha-106302-m02)       <model type='virtio'/>
	I1205 19:19:53.631331  549077 main.go:141] libmachine: (ha-106302-m02)     </interface>
	I1205 19:19:53.631344  549077 main.go:141] libmachine: (ha-106302-m02)     <interface type='network'>
	I1205 19:19:53.631354  549077 main.go:141] libmachine: (ha-106302-m02)       <source network='default'/>
	I1205 19:19:53.631367  549077 main.go:141] libmachine: (ha-106302-m02)       <model type='virtio'/>
	I1205 19:19:53.631376  549077 main.go:141] libmachine: (ha-106302-m02)     </interface>
	I1205 19:19:53.631384  549077 main.go:141] libmachine: (ha-106302-m02)     <serial type='pty'>
	I1205 19:19:53.631393  549077 main.go:141] libmachine: (ha-106302-m02)       <target port='0'/>
	I1205 19:19:53.631401  549077 main.go:141] libmachine: (ha-106302-m02)     </serial>
	I1205 19:19:53.631415  549077 main.go:141] libmachine: (ha-106302-m02)     <console type='pty'>
	I1205 19:19:53.631426  549077 main.go:141] libmachine: (ha-106302-m02)       <target type='serial' port='0'/>
	I1205 19:19:53.631434  549077 main.go:141] libmachine: (ha-106302-m02)     </console>
	I1205 19:19:53.631446  549077 main.go:141] libmachine: (ha-106302-m02)     <rng model='virtio'>
	I1205 19:19:53.631457  549077 main.go:141] libmachine: (ha-106302-m02)       <backend model='random'>/dev/random</backend>
	I1205 19:19:53.631468  549077 main.go:141] libmachine: (ha-106302-m02)     </rng>
	I1205 19:19:53.631474  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631496  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631509  549077 main.go:141] libmachine: (ha-106302-m02)   </devices>
	I1205 19:19:53.631522  549077 main.go:141] libmachine: (ha-106302-m02) </domain>
	I1205 19:19:53.631527  549077 main.go:141] libmachine: (ha-106302-m02) 
	I1205 19:19:53.638274  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:3d:5d:13 in network default
	I1205 19:19:53.638929  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring networks are active...
	I1205 19:19:53.638948  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:53.639739  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring network default is active
	I1205 19:19:53.639999  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring network mk-ha-106302 is active
	I1205 19:19:53.640360  549077 main.go:141] libmachine: (ha-106302-m02) Getting domain xml...
	I1205 19:19:53.640970  549077 main.go:141] libmachine: (ha-106302-m02) Creating domain...
	I1205 19:19:54.858939  549077 main.go:141] libmachine: (ha-106302-m02) Waiting to get IP...
	I1205 19:19:54.859905  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:54.860367  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:54.860447  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:54.860358  549462 retry.go:31] will retry after 210.406566ms: waiting for machine to come up
	I1205 19:19:55.072865  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.073270  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.073303  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.073236  549462 retry.go:31] will retry after 380.564554ms: waiting for machine to come up
	I1205 19:19:55.456055  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.456633  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.456664  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.456575  549462 retry.go:31] will retry after 318.906554ms: waiting for machine to come up
	I1205 19:19:55.777216  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.777679  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.777710  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.777619  549462 retry.go:31] will retry after 557.622429ms: waiting for machine to come up
	I1205 19:19:56.337019  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:56.337517  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:56.337547  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:56.337452  549462 retry.go:31] will retry after 733.803738ms: waiting for machine to come up
	I1205 19:19:57.072993  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:57.073519  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:57.073554  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:57.073464  549462 retry.go:31] will retry after 792.053725ms: waiting for machine to come up
	I1205 19:19:57.866686  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:57.867255  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:57.867284  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:57.867204  549462 retry.go:31] will retry after 899.083916ms: waiting for machine to come up
	I1205 19:19:58.767474  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:58.767846  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:58.767879  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:58.767799  549462 retry.go:31] will retry after 894.520794ms: waiting for machine to come up
	I1205 19:19:59.663948  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:59.664483  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:59.664517  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:59.664431  549462 retry.go:31] will retry after 1.445971502s: waiting for machine to come up
	I1205 19:20:01.112081  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:01.112472  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:01.112497  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:01.112419  549462 retry.go:31] will retry after 2.114052847s: waiting for machine to come up
	I1205 19:20:03.228602  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:03.229091  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:03.229116  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:03.229037  549462 retry.go:31] will retry after 2.786335133s: waiting for machine to come up
	I1205 19:20:06.019023  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:06.019472  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:06.019494  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:06.019436  549462 retry.go:31] will retry after 3.312152878s: waiting for machine to come up
	I1205 19:20:09.332971  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:09.333454  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:09.333485  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:09.333375  549462 retry.go:31] will retry after 4.193621264s: waiting for machine to come up
	I1205 19:20:13.528190  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:13.528561  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:13.528582  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:13.528513  549462 retry.go:31] will retry after 5.505002432s: waiting for machine to come up
	I1205 19:20:19.035383  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.035839  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has current primary IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.035869  549077 main.go:141] libmachine: (ha-106302-m02) Found IP for machine: 192.168.39.22
	I1205 19:20:19.035884  549077 main.go:141] libmachine: (ha-106302-m02) Reserving static IP address...
	I1205 19:20:19.036316  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find host DHCP lease matching {name: "ha-106302-m02", mac: "52:54:00:50:91:17", ip: "192.168.39.22"} in network mk-ha-106302
	I1205 19:20:19.111128  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Getting to WaitForSSH function...
	I1205 19:20:19.111162  549077 main.go:141] libmachine: (ha-106302-m02) Reserved static IP address: 192.168.39.22
	I1205 19:20:19.111175  549077 main.go:141] libmachine: (ha-106302-m02) Waiting for SSH to be available...
	I1205 19:20:19.113732  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.114085  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302
	I1205 19:20:19.114114  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find defined IP address of network mk-ha-106302 interface with MAC address 52:54:00:50:91:17
	I1205 19:20:19.114257  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH client type: external
	I1205 19:20:19.114278  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa (-rw-------)
	I1205 19:20:19.114319  549077 main.go:141] libmachine: (ha-106302-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:20:19.114332  549077 main.go:141] libmachine: (ha-106302-m02) DBG | About to run SSH command:
	I1205 19:20:19.114349  549077 main.go:141] libmachine: (ha-106302-m02) DBG | exit 0
	I1205 19:20:19.118035  549077 main.go:141] libmachine: (ha-106302-m02) DBG | SSH cmd err, output: exit status 255: 
	I1205 19:20:19.118057  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 19:20:19.118065  549077 main.go:141] libmachine: (ha-106302-m02) DBG | command : exit 0
	I1205 19:20:19.118070  549077 main.go:141] libmachine: (ha-106302-m02) DBG | err     : exit status 255
	I1205 19:20:19.118077  549077 main.go:141] libmachine: (ha-106302-m02) DBG | output  : 
	I1205 19:20:22.120219  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Getting to WaitForSSH function...
	I1205 19:20:22.122541  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.122838  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.122871  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.122905  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH client type: external
	I1205 19:20:22.122934  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa (-rw-------)
	I1205 19:20:22.122975  549077 main.go:141] libmachine: (ha-106302-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:20:22.122988  549077 main.go:141] libmachine: (ha-106302-m02) DBG | About to run SSH command:
	I1205 19:20:22.122997  549077 main.go:141] libmachine: (ha-106302-m02) DBG | exit 0
	I1205 19:20:22.248910  549077 main.go:141] libmachine: (ha-106302-m02) DBG | SSH cmd err, output: <nil>: 
	I1205 19:20:22.249203  549077 main.go:141] libmachine: (ha-106302-m02) KVM machine creation complete!
	I1205 19:20:22.249549  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:20:22.250245  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:22.250531  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:22.250724  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:20:22.250739  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetState
	I1205 19:20:22.252145  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:20:22.252159  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:20:22.252171  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:20:22.252176  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.255218  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.255608  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.255639  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.255817  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.256017  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.256246  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.256424  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.256663  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.256916  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.256931  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:20:22.368260  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:20:22.368313  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:20:22.368324  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.371040  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.371460  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.371481  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.371672  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.371891  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.372059  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.372173  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.372389  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.372564  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.372578  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:20:22.485513  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:20:22.485607  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:20:22.485621  549077 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:20:22.485637  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.485917  549077 buildroot.go:166] provisioning hostname "ha-106302-m02"
	I1205 19:20:22.485951  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.486197  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.489137  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.489476  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.489498  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.489650  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.489844  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.489970  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.490109  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.490248  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.490464  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.490479  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302-m02 && echo "ha-106302-m02" | sudo tee /etc/hostname
	I1205 19:20:22.616293  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302-m02
	
	I1205 19:20:22.616334  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.618960  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.619345  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.619376  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.619593  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.619776  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.619933  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.620106  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.620296  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.620475  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.620492  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:20:22.738362  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:20:22.738404  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:20:22.738463  549077 buildroot.go:174] setting up certificates
	I1205 19:20:22.738483  549077 provision.go:84] configureAuth start
	I1205 19:20:22.738504  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.738844  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:22.741581  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.741992  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.742022  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.742170  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.744256  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.744573  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.744600  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.744740  549077 provision.go:143] copyHostCerts
	I1205 19:20:22.744774  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:20:22.744818  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:20:22.744828  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:20:22.744891  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:20:22.744975  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:20:22.744994  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:20:22.745000  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:20:22.745024  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:20:22.745615  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:20:22.745684  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:20:22.745691  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:20:22.745739  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:20:22.745877  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302-m02 san=[127.0.0.1 192.168.39.22 ha-106302-m02 localhost minikube]
	I1205 19:20:22.796359  549077 provision.go:177] copyRemoteCerts
	I1205 19:20:22.796421  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:20:22.796448  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.799357  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.799732  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.799766  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.799995  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.800198  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.800385  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.800538  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:22.887828  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:20:22.887929  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:20:22.916212  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:20:22.916319  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:20:22.941232  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:20:22.941341  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:20:22.967161  549077 provision.go:87] duration metric: took 228.658819ms to configureAuth
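	[editor's note] The configureAuth step above issues a server certificate signed by the minikube CA with the SANs listed in the log (127.0.0.1 192.168.39.22 ha-106302-m02 localhost minikube). The following is a minimal, self-contained Go sketch of issuing such a SAN-bearing certificate with crypto/x509; it is not minikube's provision code, and the CA here is generated on the fly purely for illustration.

	// sancert.go: illustrative sketch of issuing a server certificate with the
	// SANs seen in the log above. Not minikube's provision code; the CA is a
	// throwaway created in-process.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA key pair and self-signed CA certificate.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
			IsCA:                  true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}

		// Server key and certificate carrying the SANs from the log line.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "ha-106302-m02", Organization: []string{"jenkins.ha-106302-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-106302-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.22")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}

		// Emit server cert and key in the PEM layout the log copies to /etc/docker.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)})
	}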
	I1205 19:20:22.967199  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:20:22.967392  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:22.967485  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.970286  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.970715  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.970749  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.970939  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.971156  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.971320  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.971433  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.971580  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.971846  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.971863  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:20:23.207888  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:20:23.207924  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:20:23.207935  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetURL
	I1205 19:20:23.209276  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using libvirt version 6000000
	I1205 19:20:23.211506  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.211907  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.211936  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.212208  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:20:23.212224  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:20:23.212232  549077 client.go:171] duration metric: took 30.053657655s to LocalClient.Create
	I1205 19:20:23.212256  549077 start.go:167] duration metric: took 30.053742841s to libmachine.API.Create "ha-106302"
	I1205 19:20:23.212293  549077 start.go:293] postStartSetup for "ha-106302-m02" (driver="kvm2")
	I1205 19:20:23.212310  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:20:23.212333  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.212577  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:20:23.212606  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.215114  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.215516  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.215546  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.215705  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.215924  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.216106  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.216253  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.304000  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:20:23.308581  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:20:23.308614  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:20:23.308698  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:20:23.308795  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:20:23.308810  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:20:23.308927  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:20:23.319412  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:20:23.344460  549077 start.go:296] duration metric: took 132.146002ms for postStartSetup
	I1205 19:20:23.344545  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:20:23.345277  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:23.348207  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.348665  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.348693  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.348984  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:20:23.349202  549077 start.go:128] duration metric: took 30.210071126s to createHost
	I1205 19:20:23.349267  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.351860  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.352216  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.352247  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.352437  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.352631  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.352819  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.352959  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.353129  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:23.353382  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:23.353399  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:20:23.465312  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426423.446273328
	
	I1205 19:20:23.465337  549077 fix.go:216] guest clock: 1733426423.446273328
	I1205 19:20:23.465346  549077 fix.go:229] Guest: 2024-12-05 19:20:23.446273328 +0000 UTC Remote: 2024-12-05 19:20:23.349227376 +0000 UTC m=+77.722963766 (delta=97.045952ms)
	I1205 19:20:23.465364  549077 fix.go:200] guest clock delta is within tolerance: 97.045952ms
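	[editor's note] The "guest clock" check above runs `date +%s.%N` over SSH and compares the result against the host clock with a tolerance. A rough Go sketch of that comparison follows, assuming the same seconds.nanoseconds output format; the one-second tolerance is a hypothetical value, not minikube's actual setting.

	// clockdelta.go: sketch of the guest-clock tolerance check seen above.
	// Parses `date +%s.%N` output and compares it against local time.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// guestTime converts output like "1733426423.446273328" into a time.Time.
	func guestTime(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := parts[1]
			if len(frac) > 9 {
				frac = frac[:9] // keep nanosecond precision only
			} else {
				frac += strings.Repeat("0", 9-len(frac)) // right-pad so ".4" means 400ms
			}
			nsec, err = strconv.ParseInt(frac, 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		const tolerance = time.Second // hypothetical tolerance, not minikube's actual value
		guest, err := guestTime("1733426423.446273328") // sample output from the log
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}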
	I1205 19:20:23.465370  549077 start.go:83] releasing machines lock for "ha-106302-m02", held for 30.326335436s
	I1205 19:20:23.465398  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.465708  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:23.468308  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.468731  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.468764  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.471281  549077 out.go:177] * Found network options:
	I1205 19:20:23.472818  549077 out.go:177]   - NO_PROXY=192.168.39.185
	W1205 19:20:23.473976  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:20:23.474014  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474583  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474762  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474896  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:20:23.474942  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	W1205 19:20:23.474975  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:20:23.475049  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:20:23.475075  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.477606  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.477936  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.477969  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.477989  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.478113  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.478273  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.478379  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.478405  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.478432  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.478613  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.478614  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.478752  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.478903  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.479088  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.717492  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:20:23.724398  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:20:23.724467  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:20:23.742377  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:20:23.742416  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:20:23.742481  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:20:23.759474  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:20:23.774720  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:20:23.774808  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:20:23.790887  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:20:23.807005  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:20:23.919834  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:20:24.073552  549077 docker.go:233] disabling docker service ...
	I1205 19:20:24.073644  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:20:24.088648  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:20:24.103156  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:20:24.227966  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:20:24.343808  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:20:24.359016  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:20:24.378372  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:20:24.378434  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.390093  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:20:24.390163  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.402052  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.413868  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.425063  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:20:24.436756  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.448351  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.466246  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.477646  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:20:24.487958  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:20:24.488022  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:20:24.504864  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
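	[editor's note] The sequence above tolerates a missing /proc/sys/net/bridge/bridge-nf-call-iptables by loading br_netfilter and then enabling IPv4 forwarding. A hedged Go sketch of that fallback using os/exec is below; it needs root to actually succeed and only mirrors the flow visible in the log, command names taken verbatim from it.

	// brnetfilter.go: sketch of the fallback seen above: if the bridge-netfilter
	// sysctl cannot be read, load br_netfilter, then enable IPv4 forwarding.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		// Probe for the bridge-netfilter sysctl; a non-zero exit is tolerated.
		if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			log.Printf("sysctl probe failed (%v), loading br_netfilter", err)
			if err := run("modprobe", "br_netfilter"); err != nil {
				log.Fatalf("modprobe br_netfilter: %v", err)
			}
		}
		// Equivalent to `echo 1 > /proc/sys/net/ipv4/ip_forward` in the log.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			log.Fatalf("enable ip_forward: %v", err)
		}
	}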
	I1205 19:20:24.516929  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:24.650055  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:20:24.749984  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:20:24.750068  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:20:24.754929  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:20:24.754993  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:20:24.758880  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:20:24.803432  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:20:24.803519  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:20:24.832773  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:20:24.866071  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:20:24.867336  549077 out.go:177]   - env NO_PROXY=192.168.39.185
	I1205 19:20:24.868566  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:24.871432  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:24.871918  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:24.871951  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:24.872171  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:20:24.876554  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
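	[editor's note] The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the current mapping via a temp file. A Go equivalent of that replace-or-append pattern is sketched below; it writes to a placeholder file ("hosts.test") rather than /etc/hosts, since the real path needs root.

	// hostsentry.go: sketch of the replace-or-append pattern used above for
	// host.minikube.internal: drop any existing line for the name, then append
	// the current IP, writing through a temp file like the `> /tmp/h.$$; cp` dance.
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			// Drop lines that already map this hostname.
			if len(fields) >= 2 && fields[len(fields)-1] == name {
				continue
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // replaces the original, like `cp /tmp/h.$$ /etc/hosts`
	}

	func main() {
		// Placeholder file; the log targets /etc/hosts on the guest.
		if err := setHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}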
	I1205 19:20:24.890047  549077 mustload.go:65] Loading cluster: ha-106302
	I1205 19:20:24.890241  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:24.890558  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:24.890603  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:24.905579  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I1205 19:20:24.906049  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:24.906603  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:24.906625  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:24.906945  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:24.907214  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:20:24.908815  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:20:24.909241  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:24.909290  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:24.924888  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I1205 19:20:24.925342  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:24.925844  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:24.925864  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:24.926328  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:24.926542  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:20:24.926741  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.22
	I1205 19:20:24.926754  549077 certs.go:194] generating shared ca certs ...
	I1205 19:20:24.926770  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:24.926902  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:20:24.926939  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:20:24.926948  549077 certs.go:256] generating profile certs ...
	I1205 19:20:24.927023  549077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:20:24.927047  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c
	I1205 19:20:24.927061  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.254]
	I1205 19:20:25.018998  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c ...
	I1205 19:20:25.019030  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c: {Name:mkb73e87a5bbbf4f4c79d1fb041b857c135f5f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:25.019217  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c ...
	I1205 19:20:25.019230  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c: {Name:mk2fba0e13caab29e22d03865232eceeba478b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:25.019304  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:20:25.019444  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:20:25.019581  549077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:20:25.019598  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:20:25.019611  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:20:25.019630  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:20:25.019645  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:20:25.019658  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:20:25.019670  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:20:25.019681  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:20:25.019693  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:20:25.019742  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:20:25.019769  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:20:25.019780  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:20:25.019800  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:20:25.019822  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:20:25.019843  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:20:25.019881  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:20:25.019905  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.019919  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.019931  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.019965  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:20:25.022938  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:25.023319  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:20:25.023341  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:25.023553  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:20:25.023832  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:20:25.024047  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:20:25.024204  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:20:25.100678  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 19:20:25.110731  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 19:20:25.125160  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 19:20:25.130012  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 19:20:25.140972  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 19:20:25.146148  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 19:20:25.157617  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 19:20:25.162172  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1205 19:20:25.173149  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 19:20:25.178465  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 19:20:25.189406  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 19:20:25.193722  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1205 19:20:25.206028  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:20:25.233287  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:20:25.261305  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:20:25.289482  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:20:25.316415  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 19:20:25.342226  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:20:25.368246  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:20:25.393426  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:20:25.419609  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:20:25.445786  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:20:25.469979  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:20:25.493824  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 19:20:25.510843  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 19:20:25.527645  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 19:20:25.545705  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1205 19:20:25.563452  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 19:20:25.580089  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1205 19:20:25.596848  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 19:20:25.613807  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:20:25.619697  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:20:25.630983  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.635623  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.635686  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.641677  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:20:25.653239  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:20:25.664932  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.669827  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.669897  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.675619  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:20:25.687127  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:20:25.698338  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.702836  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.702900  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.708667  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
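	[editor's note] The loop above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs by its OpenSSL subject hash (e.g. 3ec20f2e.0). A minimal Go sketch of that link step, shelling out to openssl for the hash exactly as the logged command does; paths are illustrative, not the exact minikube paths.

	// cahash.go: sketch of linking a CA certificate into /etc/ssl/certs under its
	// OpenSSL subject hash, mirroring the `openssl x509 -hash -noout` + `ln -fs`
	// pair in the log.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		certPath := "/usr/share/ca-certificates/5381862.pem" // example path from the log

		// Ask openssl for the subject hash, exactly as the logged command does.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			log.Fatalf("openssl x509 -hash: %v", err)
		}
		hash := strings.TrimSpace(string(out))

		// /etc/ssl/certs/<hash>.0 -> the certificate file.
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace an existing link
		if err := os.Symlink(certPath, link); err != nil {
			log.Fatalf("symlink %s: %v", link, err)
		}
		fmt.Printf("linked %s -> %s\n", link, certPath)
	}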
	I1205 19:20:25.720085  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:20:25.724316  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:20:25.724377  549077 kubeadm.go:934] updating node {m02 192.168.39.22 8443 v1.31.2 crio true true} ...
	I1205 19:20:25.724468  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:20:25.724495  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:20:25.724527  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:20:25.742381  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:20:25.742481  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
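	[editor's note] The kube-vip static pod above is generated from a handful of cluster parameters (VIP 192.168.39.254, interface eth0, port 8443, image tag). A Go sketch of rendering such a manifest with text/template follows; the template is a trimmed illustration with a simplified parameter set, not minikube's actual kube-vip template.

	// kubevip.go: sketch of rendering a kube-vip static-pod manifest from a few
	// parameters, in the spirit of the generated config above.
	package main

	import (
		"log"
		"os"
		"text/template"
	)

	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: {{.Image}}
	    args: ["manager"]
	    env:
	    - name: vip_interface
	      value: {{.Interface}}
	    - name: port
	      value: "{{.Port}}"
	    - name: address
	      value: {{.VIP}}
	    - name: cp_enable
	      value: "true"
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/admin.conf
	    name: kubeconfig
	`

	type params struct {
		VIP, Interface, Image string
		Port                  int
	}

	func main() {
		tmpl := template.Must(template.New("kube-vip").Parse(manifest))
		// Values taken from the generated config in the log.
		p := params{VIP: "192.168.39.254", Interface: "eth0", Image: "ghcr.io/kube-vip/kube-vip:v0.8.6", Port: 8443}
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			log.Fatal(err)
		}
	}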
	I1205 19:20:25.742576  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:20:25.753160  549077 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 19:20:25.753241  549077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 19:20:25.763396  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 19:20:25.763426  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:20:25.763482  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:20:25.763508  549077 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1205 19:20:25.763539  549077 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1205 19:20:25.767948  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 19:20:25.767974  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 19:20:27.082938  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:20:27.083030  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:20:27.089029  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 19:20:27.089083  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 19:20:27.157306  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:20:27.187033  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:20:27.187142  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:20:27.195317  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 19:20:27.195366  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
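The three transfers above follow the standard dl.k8s.io layout, where every binary has a sibling .sha256 file that serves as the checksum source. A hand-run sketch of the same download-and-verify step, assembled only from the URLs and target directory shown in this log:

    cd "$(mktemp -d)"
    for bin in kubectl kubeadm kubelet; do
      curl -fsSLO "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/${bin}"
      curl -fsSLO "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/${bin}.sha256"
      echo "$(cat ${bin}.sha256)  ${bin}" | sha256sum --check   # .sha256 holds the bare hash
    done
    sudo install -m 0755 kubectl kubeadm kubelet /var/lib/minikube/binaries/v1.31.2/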
	I1205 19:20:27.686796  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 19:20:27.697152  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1205 19:20:27.715018  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:20:27.734908  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:20:27.752785  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:20:27.756906  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
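The bash one-liner above strips any stale control-plane.minikube.internal entry and re-appends it pointed at the HA VIP, so after it runs /etc/hosts on the node carries exactly one line of this form (IP taken from this log):

    192.168.39.254	control-plane.minikube.internal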
	I1205 19:20:27.769582  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:27.907328  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:20:27.931860  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:20:27.932222  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:27.932282  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:27.948463  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I1205 19:20:27.949044  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:27.949565  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:27.949592  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:27.949925  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:27.950146  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:20:27.950314  549077 start.go:317] joinCluster: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:20:27.950422  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 19:20:27.950440  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:20:27.953425  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:27.953881  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:20:27.953912  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:27.954070  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:20:27.954316  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:20:27.954453  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:20:27.954606  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:20:28.113909  549077 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:20:28.113956  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kqxul8.esbt6vl0oo3pylcw --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443"
	I1205 19:20:49.921346  549077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kqxul8.esbt6vl0oo3pylcw --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443": (21.80735449s)
	I1205 19:20:49.921399  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 19:20:50.372592  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302-m02 minikube.k8s.io/updated_at=2024_12_05T19_20_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=false
	I1205 19:20:50.546557  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-106302-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 19:20:50.670851  549077 start.go:319] duration metric: took 22.720530002s to joinCluster
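The sequence above is plain kubeadm plus two kubectl follow-ups: a join token minted on the first control plane, the join itself on m02, then a minikube label and removal of the control-plane NoSchedule taint. Reproduced by hand it would look roughly like this (token and CA hash copied verbatim from this log and long expired; the flags are the ones minikube passed):

    # on the existing control plane
    sudo kubeadm token create --print-join-command --ttl=0
    # on the joining node (ha-106302-m02)
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token kqxul8.esbt6vl0oo3pylcw \
      --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
      --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443 \
      --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m02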
	I1205 19:20:50.670996  549077 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:20:50.671311  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:50.672473  549077 out.go:177] * Verifying Kubernetes components...
	I1205 19:20:50.673807  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:50.984620  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:20:51.019677  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:20:51.020052  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 19:20:51.020153  549077 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.185:8443
	I1205 19:20:51.020526  549077 node_ready.go:35] waiting up to 6m0s for node "ha-106302-m02" to be "Ready" ...
	I1205 19:20:51.020686  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:51.020701  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:51.020713  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:51.020723  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:51.041602  549077 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1205 19:20:51.521579  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:51.521608  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:51.521618  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:51.521624  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:51.528072  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:20:52.021672  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:52.021725  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:52.021737  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:52.021745  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:52.033142  549077 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1205 19:20:52.521343  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:52.521374  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:52.521385  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:52.521392  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:52.538251  549077 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1205 19:20:53.021297  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:53.021332  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:53.021341  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:53.021348  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:53.024986  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:53.025544  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:53.521241  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:53.521267  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:53.521276  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:53.521280  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:53.524346  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:54.021533  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:54.021555  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:54.021563  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:54.021566  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:54.024867  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:54.521530  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:54.521559  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:54.521573  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:54.521579  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:54.525086  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.020940  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:55.020967  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:55.020978  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:55.020982  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:55.024965  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.521541  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:55.521567  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:55.521578  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:55.521583  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:55.524843  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.525513  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:56.021561  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:56.021592  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:56.021605  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:56.021613  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:56.032511  549077 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1205 19:20:56.521545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:56.521569  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:56.521578  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:56.521582  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:56.525173  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:57.021393  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:57.021418  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:57.021428  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:57.021452  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:57.024653  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:57.521602  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:57.521630  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:57.521642  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:57.521648  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:57.524714  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:58.021076  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:58.021102  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:58.021111  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:58.021115  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:58.024741  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:58.025390  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:58.521263  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:58.521301  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:58.521311  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:58.521316  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:58.524604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:59.021545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:59.021570  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:59.021579  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:59.021585  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:59.025044  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:59.521104  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:59.521130  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:59.521139  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:59.521142  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:59.524601  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:00.021726  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:00.021752  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:00.021761  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:00.021765  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:00.025155  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:00.025976  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:00.521405  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:00.521429  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:00.521438  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:00.521443  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:00.524889  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:01.021527  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:01.021552  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:01.021564  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:01.021570  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:01.025273  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:01.521362  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:01.521386  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:01.521395  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:01.521400  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:01.525347  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.021591  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:02.021615  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:02.021624  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:02.021629  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:02.025220  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.521521  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:02.521548  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:02.521557  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:02.521562  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:02.524828  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.525818  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:03.021696  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:03.021722  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:03.021731  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:03.021735  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:03.025467  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:03.521081  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:03.521106  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:03.521115  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:03.521118  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:03.525582  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:04.021546  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:04.021570  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:04.021579  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:04.021583  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:04.025004  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:04.520903  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:04.520929  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:04.520937  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:04.520942  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:04.524427  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:05.021518  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:05.021545  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:05.021554  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:05.021557  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:05.025066  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:05.025792  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:05.520844  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:05.520870  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:05.520880  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:05.520885  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:05.524450  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:06.021705  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:06.021737  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:06.021750  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:06.021757  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:06.028871  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:21:06.520789  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:06.520815  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:06.520824  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:06.520829  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:06.524081  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:07.021065  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:07.021090  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:07.021099  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:07.021104  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:07.025141  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:07.521099  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:07.521129  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:07.521139  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:07.521142  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:07.524645  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:07.525369  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:08.021173  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:08.021197  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:08.021205  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:08.021211  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:08.024992  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:08.520960  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:08.520986  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:08.520994  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:08.521000  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:08.526502  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:21:09.021508  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:09.021532  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:09.021541  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:09.021545  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:09.024675  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:09.521594  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:09.521619  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:09.521628  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:09.521631  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:09.525284  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:09.525956  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:10.021222  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.021257  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.021266  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.021271  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.024522  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.025029  549077 node_ready.go:49] node "ha-106302-m02" has status "Ready":"True"
	I1205 19:21:10.025048  549077 node_ready.go:38] duration metric: took 19.004494335s for node "ha-106302-m02" to be "Ready" ...
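The polling loop above is minikube's client-go version of waiting on the node's Ready condition. From a shell the equivalent wait would be approximately the following (a sketch, assuming the kubeconfig context carries the profile name ha-106302, which is minikube's default):

    kubectl --context ha-106302 wait --for=condition=Ready node/ha-106302-m02 --timeout=6m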
	I1205 19:21:10.025058  549077 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:21:10.025143  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:10.025161  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.025168  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.025172  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.029254  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:10.037343  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.037449  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-45m77
	I1205 19:21:10.037458  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.037466  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.037471  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.041083  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.041839  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.041858  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.041871  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.041877  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.045415  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.045998  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.046023  549077 pod_ready.go:82] duration metric: took 8.64868ms for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.046036  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.046126  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sjsv2
	I1205 19:21:10.046137  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.046148  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.046157  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.048885  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.049682  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.049701  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.049711  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.049719  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.052106  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.052838  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.052859  549077 pod_ready.go:82] duration metric: took 6.814644ms for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.052870  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.052943  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302
	I1205 19:21:10.052958  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.052969  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.052977  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.055429  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.056066  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.056082  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.056091  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.056098  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.058521  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.059123  549077 pod_ready.go:93] pod "etcd-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.059143  549077 pod_ready.go:82] duration metric: took 6.26496ms for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.059152  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.059214  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m02
	I1205 19:21:10.059222  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.059229  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.059234  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.061697  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.062341  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.062358  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.062365  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.062369  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.064629  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.065300  549077 pod_ready.go:93] pod "etcd-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.065321  549077 pod_ready.go:82] duration metric: took 6.163254ms for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.065335  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.221800  549077 request.go:632] Waited for 156.353212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:21:10.221879  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:21:10.221887  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.221896  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.221902  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.225800  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.421906  549077 request.go:632] Waited for 195.38917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.421986  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.421994  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.422009  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.422020  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.425349  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.426055  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.426080  549077 pod_ready.go:82] duration metric: took 360.734464ms for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.426094  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.622166  549077 request.go:632] Waited for 195.985328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:21:10.622258  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:21:10.622264  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.622274  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.622278  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.626000  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.822214  549077 request.go:632] Waited for 195.406875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.822287  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.822292  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.822300  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.822313  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.825573  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.826254  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.826276  549077 pod_ready.go:82] duration metric: took 400.173601ms for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.826290  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.021260  549077 request.go:632] Waited for 194.873219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:21:11.021346  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:21:11.021355  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.021363  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.021370  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.024811  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:11.221934  549077 request.go:632] Waited for 196.368194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:11.222013  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:11.222048  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.222064  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.222069  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.226121  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:11.226777  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:11.226804  549077 pod_ready.go:82] duration metric: took 400.496709ms for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.226817  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.421793  549077 request.go:632] Waited for 194.889039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:21:11.421939  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:21:11.421953  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.421962  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.421966  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.425791  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:11.621786  549077 request.go:632] Waited for 195.325808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:11.621884  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:11.621897  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.621912  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.621921  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.626156  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:11.626616  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:11.626639  549077 pod_ready.go:82] duration metric: took 399.812324ms for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.626651  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.821729  549077 request.go:632] Waited for 194.997004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:21:11.821817  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:21:11.821822  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.821831  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.821838  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.825718  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.021841  549077 request.go:632] Waited for 195.410535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:12.021958  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:12.021969  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.021977  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.021984  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.025441  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.025999  549077 pod_ready.go:93] pod "kube-proxy-n57lf" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.026021  549077 pod_ready.go:82] duration metric: took 399.361827ms for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.026047  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.222118  549077 request.go:632] Waited for 195.969624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:21:12.222187  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:21:12.222192  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.222200  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.222204  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.225785  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.422070  549077 request.go:632] Waited for 195.377811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.422132  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.422137  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.422145  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.422149  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.426002  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.426709  549077 pod_ready.go:93] pod "kube-proxy-zw6nj" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.426735  549077 pod_ready.go:82] duration metric: took 400.678816ms for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.426748  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.621608  549077 request.go:632] Waited for 194.758143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:21:12.621678  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:21:12.621683  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.621691  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.621699  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.625056  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.822084  549077 request.go:632] Waited for 196.278548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.822154  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.822166  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.822175  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.822178  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.826187  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.827028  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.827048  549077 pod_ready.go:82] duration metric: took 400.290627ms for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.827061  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:13.021645  549077 request.go:632] Waited for 194.500049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:21:13.021737  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:21:13.021746  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.021787  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.021795  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.025431  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:13.221555  549077 request.go:632] Waited for 195.53176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:13.221632  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:13.221641  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.221652  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.221657  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.226002  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:13.226628  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:13.226651  549077 pod_ready.go:82] duration metric: took 399.582286ms for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:13.226663  549077 pod_ready.go:39] duration metric: took 3.201594435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
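The per-pod checks above walk every pod matching the listed control-plane selectors on both nodes. A rough command-line equivalent, one selector at a time (hedged sketch; it assumes each selector already matches at least one pod, which holds at this point in the log):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context ha-106302 -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done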
	I1205 19:21:13.226683  549077 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:21:13.226740  549077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:21:13.244668  549077 api_server.go:72] duration metric: took 22.573625009s to wait for apiserver process to appear ...
	I1205 19:21:13.244706  549077 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:21:13.244737  549077 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I1205 19:21:13.252149  549077 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I1205 19:21:13.252242  549077 round_trippers.go:463] GET https://192.168.39.185:8443/version
	I1205 19:21:13.252252  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.252260  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.252283  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.253152  549077 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 19:21:13.253251  549077 api_server.go:141] control plane version: v1.31.2
	I1205 19:21:13.253269  549077 api_server.go:131] duration metric: took 8.556554ms to wait for apiserver health ...
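The healthz and version probes above are bare HTTPS GETs against the first control plane. Outside the test they can be reproduced with curl; since unauthenticated access to /healthz depends on the cluster keeping the default anonymous-auth binding, presenting the profile's client certificate (paths follow the minikube layout seen earlier in this log, rooted here at $HOME for illustration) is the safer assumption:

    MK=$HOME/.minikube   # the log's actual root is /home/jenkins/minikube-integration/20052-530897/.minikube
    curl --cacert "$MK/ca.crt" \
         --cert "$MK/profiles/ha-106302/client.crt" --key "$MK/profiles/ha-106302/client.key" \
         https://192.168.39.185:8443/healthz
    curl --cacert "$MK/ca.crt" \
         --cert "$MK/profiles/ha-106302/client.crt" --key "$MK/profiles/ha-106302/client.key" \
         https://192.168.39.185:8443/version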
	I1205 19:21:13.253277  549077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:21:13.421707  549077 request.go:632] Waited for 168.323563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.421778  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.421784  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.421803  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.421808  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.428060  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:21:13.433027  549077 system_pods.go:59] 17 kube-system pods found
	I1205 19:21:13.433063  549077 system_pods.go:61] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:21:13.433069  549077 system_pods.go:61] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:21:13.433073  549077 system_pods.go:61] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:21:13.433076  549077 system_pods.go:61] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:21:13.433079  549077 system_pods.go:61] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:21:13.433083  549077 system_pods.go:61] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:21:13.433087  549077 system_pods.go:61] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:21:13.433090  549077 system_pods.go:61] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:21:13.433094  549077 system_pods.go:61] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:21:13.433097  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:21:13.433101  549077 system_pods.go:61] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:21:13.433104  549077 system_pods.go:61] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:21:13.433107  549077 system_pods.go:61] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:21:13.433110  549077 system_pods.go:61] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:21:13.433114  549077 system_pods.go:61] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:21:13.433119  549077 system_pods.go:61] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:21:13.433125  549077 system_pods.go:61] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:21:13.433131  549077 system_pods.go:74] duration metric: took 179.848181ms to wait for pod list to return data ...
	I1205 19:21:13.433140  549077 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:21:13.621481  549077 request.go:632] Waited for 188.228658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:21:13.621548  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:21:13.621554  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.621562  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.621566  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.625432  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:13.625697  549077 default_sa.go:45] found service account: "default"
	I1205 19:21:13.625716  549077 default_sa.go:55] duration metric: took 192.568863ms for default service account to be created ...
	I1205 19:21:13.625725  549077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:21:13.821886  549077 request.go:632] Waited for 196.082261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.821977  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.821988  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.821997  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.822001  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.828461  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:21:13.834834  549077 system_pods.go:86] 17 kube-system pods found
	I1205 19:21:13.834869  549077 system_pods.go:89] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:21:13.834877  549077 system_pods.go:89] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:21:13.834882  549077 system_pods.go:89] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:21:13.834886  549077 system_pods.go:89] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:21:13.834890  549077 system_pods.go:89] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:21:13.834894  549077 system_pods.go:89] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:21:13.834898  549077 system_pods.go:89] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:21:13.834901  549077 system_pods.go:89] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:21:13.834905  549077 system_pods.go:89] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:21:13.834909  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:21:13.834912  549077 system_pods.go:89] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:21:13.834915  549077 system_pods.go:89] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:21:13.834919  549077 system_pods.go:89] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:21:13.834924  549077 system_pods.go:89] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:21:13.834928  549077 system_pods.go:89] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:21:13.834935  549077 system_pods.go:89] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:21:13.834939  549077 system_pods.go:89] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:21:13.834946  549077 system_pods.go:126] duration metric: took 209.215629ms to wait for k8s-apps to be running ...
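[Editor's note] The system_pods wait above lists kube-system pods and confirms each is Running. A minimal client-go sketch of that check follows; the kubeconfig path is an assumption for illustration, not the path the test harness uses.

// podswait.go: list kube-system pods and report any that are not Running.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location, used only for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("pod %q is %s, not Running\n", p.Name, p.Status.Phase)
		}
	}
}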
	I1205 19:21:13.834957  549077 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:21:13.835009  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:21:13.850235  549077 system_svc.go:56] duration metric: took 15.264777ms WaitForService to wait for kubelet
	I1205 19:21:13.850283  549077 kubeadm.go:582] duration metric: took 23.179247512s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:21:13.850305  549077 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:21:14.021757  549077 request.go:632] Waited for 171.347316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes
	I1205 19:21:14.021833  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes
	I1205 19:21:14.021840  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:14.021850  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:14.021860  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:14.026541  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:14.027820  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:21:14.027846  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:21:14.027863  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:21:14.027868  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:21:14.027874  549077 node_conditions.go:105] duration metric: took 177.564002ms to run NodePressure ...
	I1205 19:21:14.027887  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:21:14.027919  549077 start.go:255] writing updated cluster config ...
	I1205 19:21:14.029921  549077 out.go:201] 
	I1205 19:21:14.031474  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:14.031571  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:14.033173  549077 out.go:177] * Starting "ha-106302-m03" control-plane node in "ha-106302" cluster
	I1205 19:21:14.034362  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:21:14.034386  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:21:14.034498  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:21:14.034514  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:21:14.034605  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:14.034796  549077 start.go:360] acquireMachinesLock for ha-106302-m03: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:21:14.034842  549077 start.go:364] duration metric: took 26.337µs to acquireMachinesLock for "ha-106302-m03"
	I1205 19:21:14.034860  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:21:14.034960  549077 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1205 19:21:14.036589  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:21:14.036698  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:14.036753  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:14.052449  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1205 19:21:14.052905  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:14.053431  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:14.053458  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:14.053758  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:14.053945  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:14.054107  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:14.054258  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:21:14.054297  549077 client.go:168] LocalClient.Create starting
	I1205 19:21:14.054348  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:21:14.054391  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:21:14.054413  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:21:14.054484  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:21:14.054515  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:21:14.054536  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:21:14.054563  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:21:14.054575  549077 main.go:141] libmachine: (ha-106302-m03) Calling .PreCreateCheck
	I1205 19:21:14.054725  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:14.055103  549077 main.go:141] libmachine: Creating machine...
	I1205 19:21:14.055117  549077 main.go:141] libmachine: (ha-106302-m03) Calling .Create
	I1205 19:21:14.055267  549077 main.go:141] libmachine: (ha-106302-m03) Creating KVM machine...
	I1205 19:21:14.056572  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found existing default KVM network
	I1205 19:21:14.056653  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found existing private KVM network mk-ha-106302
	I1205 19:21:14.056780  549077 main.go:141] libmachine: (ha-106302-m03) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 ...
	I1205 19:21:14.056804  549077 main.go:141] libmachine: (ha-106302-m03) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:21:14.056850  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.056773  549869 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:21:14.056935  549077 main.go:141] libmachine: (ha-106302-m03) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:21:14.349600  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.349456  549869 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa...
	I1205 19:21:14.429525  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.429393  549869 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/ha-106302-m03.rawdisk...
	I1205 19:21:14.429558  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Writing magic tar header
	I1205 19:21:14.429573  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Writing SSH key tar header
	I1205 19:21:14.429586  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.429511  549869 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 ...
	I1205 19:21:14.429599  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03
	I1205 19:21:14.429612  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 (perms=drwx------)
	I1205 19:21:14.429633  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:21:14.429648  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:21:14.429664  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:21:14.429734  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:21:14.429769  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:21:14.429779  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:21:14.429798  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:21:14.429808  549077 main.go:141] libmachine: (ha-106302-m03) Creating domain...
	I1205 19:21:14.429823  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:21:14.429833  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:21:14.429861  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:21:14.429878  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home
	I1205 19:21:14.429910  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Skipping /home - not owner
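[Editor's note] The "Fixing permissions" / "Setting executable bit" lines above walk each parent of the machine store directory and make sure it is traversable, stopping at directories the user does not own (here /home). A rough sketch of that walk, under the assumption that only the owner-execute bit matters:

// fixperms.go: add the owner-execute bit on each parent directory of a path,
// stopping when chmod fails (e.g. a directory we do not own). Illustrative only.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func fixPermissions(start string) error {
	for dir := start; dir != "/" && dir != "."; dir = filepath.Dir(dir) {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		mode := info.Mode().Perm()
		if mode&0o100 == 0 {
			if err := os.Chmod(dir, mode|0o100); err != nil {
				fmt.Printf("skipping %s: %v\n", dir, err)
				return nil
			}
			fmt.Printf("set executable bit on %s\n", dir)
		}
	}
	return nil
}

func main() {
	_ = fixPermissions("/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03")
}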
	I1205 19:21:14.430728  549077 main.go:141] libmachine: (ha-106302-m03) define libvirt domain using xml: 
	I1205 19:21:14.430737  549077 main.go:141] libmachine: (ha-106302-m03) <domain type='kvm'>
	I1205 19:21:14.430743  549077 main.go:141] libmachine: (ha-106302-m03)   <name>ha-106302-m03</name>
	I1205 19:21:14.430748  549077 main.go:141] libmachine: (ha-106302-m03)   <memory unit='MiB'>2200</memory>
	I1205 19:21:14.430753  549077 main.go:141] libmachine: (ha-106302-m03)   <vcpu>2</vcpu>
	I1205 19:21:14.430758  549077 main.go:141] libmachine: (ha-106302-m03)   <features>
	I1205 19:21:14.430762  549077 main.go:141] libmachine: (ha-106302-m03)     <acpi/>
	I1205 19:21:14.430769  549077 main.go:141] libmachine: (ha-106302-m03)     <apic/>
	I1205 19:21:14.430774  549077 main.go:141] libmachine: (ha-106302-m03)     <pae/>
	I1205 19:21:14.430778  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.430783  549077 main.go:141] libmachine: (ha-106302-m03)   </features>
	I1205 19:21:14.430790  549077 main.go:141] libmachine: (ha-106302-m03)   <cpu mode='host-passthrough'>
	I1205 19:21:14.430795  549077 main.go:141] libmachine: (ha-106302-m03)   
	I1205 19:21:14.430801  549077 main.go:141] libmachine: (ha-106302-m03)   </cpu>
	I1205 19:21:14.430806  549077 main.go:141] libmachine: (ha-106302-m03)   <os>
	I1205 19:21:14.430811  549077 main.go:141] libmachine: (ha-106302-m03)     <type>hvm</type>
	I1205 19:21:14.430816  549077 main.go:141] libmachine: (ha-106302-m03)     <boot dev='cdrom'/>
	I1205 19:21:14.430823  549077 main.go:141] libmachine: (ha-106302-m03)     <boot dev='hd'/>
	I1205 19:21:14.430849  549077 main.go:141] libmachine: (ha-106302-m03)     <bootmenu enable='no'/>
	I1205 19:21:14.430873  549077 main.go:141] libmachine: (ha-106302-m03)   </os>
	I1205 19:21:14.430884  549077 main.go:141] libmachine: (ha-106302-m03)   <devices>
	I1205 19:21:14.430900  549077 main.go:141] libmachine: (ha-106302-m03)     <disk type='file' device='cdrom'>
	I1205 19:21:14.430917  549077 main.go:141] libmachine: (ha-106302-m03)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/boot2docker.iso'/>
	I1205 19:21:14.430928  549077 main.go:141] libmachine: (ha-106302-m03)       <target dev='hdc' bus='scsi'/>
	I1205 19:21:14.430936  549077 main.go:141] libmachine: (ha-106302-m03)       <readonly/>
	I1205 19:21:14.430944  549077 main.go:141] libmachine: (ha-106302-m03)     </disk>
	I1205 19:21:14.430951  549077 main.go:141] libmachine: (ha-106302-m03)     <disk type='file' device='disk'>
	I1205 19:21:14.430963  549077 main.go:141] libmachine: (ha-106302-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:21:14.431003  549077 main.go:141] libmachine: (ha-106302-m03)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/ha-106302-m03.rawdisk'/>
	I1205 19:21:14.431029  549077 main.go:141] libmachine: (ha-106302-m03)       <target dev='hda' bus='virtio'/>
	I1205 19:21:14.431041  549077 main.go:141] libmachine: (ha-106302-m03)     </disk>
	I1205 19:21:14.431052  549077 main.go:141] libmachine: (ha-106302-m03)     <interface type='network'>
	I1205 19:21:14.431065  549077 main.go:141] libmachine: (ha-106302-m03)       <source network='mk-ha-106302'/>
	I1205 19:21:14.431075  549077 main.go:141] libmachine: (ha-106302-m03)       <model type='virtio'/>
	I1205 19:21:14.431084  549077 main.go:141] libmachine: (ha-106302-m03)     </interface>
	I1205 19:21:14.431096  549077 main.go:141] libmachine: (ha-106302-m03)     <interface type='network'>
	I1205 19:21:14.431107  549077 main.go:141] libmachine: (ha-106302-m03)       <source network='default'/>
	I1205 19:21:14.431122  549077 main.go:141] libmachine: (ha-106302-m03)       <model type='virtio'/>
	I1205 19:21:14.431134  549077 main.go:141] libmachine: (ha-106302-m03)     </interface>
	I1205 19:21:14.431143  549077 main.go:141] libmachine: (ha-106302-m03)     <serial type='pty'>
	I1205 19:21:14.431151  549077 main.go:141] libmachine: (ha-106302-m03)       <target port='0'/>
	I1205 19:21:14.431161  549077 main.go:141] libmachine: (ha-106302-m03)     </serial>
	I1205 19:21:14.431168  549077 main.go:141] libmachine: (ha-106302-m03)     <console type='pty'>
	I1205 19:21:14.431178  549077 main.go:141] libmachine: (ha-106302-m03)       <target type='serial' port='0'/>
	I1205 19:21:14.431186  549077 main.go:141] libmachine: (ha-106302-m03)     </console>
	I1205 19:21:14.431201  549077 main.go:141] libmachine: (ha-106302-m03)     <rng model='virtio'>
	I1205 19:21:14.431213  549077 main.go:141] libmachine: (ha-106302-m03)       <backend model='random'>/dev/random</backend>
	I1205 19:21:14.431223  549077 main.go:141] libmachine: (ha-106302-m03)     </rng>
	I1205 19:21:14.431230  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.431248  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.431260  549077 main.go:141] libmachine: (ha-106302-m03)   </devices>
	I1205 19:21:14.431266  549077 main.go:141] libmachine: (ha-106302-m03) </domain>
	I1205 19:21:14.431276  549077 main.go:141] libmachine: (ha-106302-m03) 
	I1205 19:21:14.438494  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:19:ce:fd in network default
	I1205 19:21:14.439230  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:14.439249  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring networks are active...
	I1205 19:21:14.440093  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring network default is active
	I1205 19:21:14.440381  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring network mk-ha-106302 is active
	I1205 19:21:14.440705  549077 main.go:141] libmachine: (ha-106302-m03) Getting domain xml...
	I1205 19:21:14.441404  549077 main.go:141] libmachine: (ha-106302-m03) Creating domain...
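[Editor's note] The block above prints the libvirt domain XML and then defines and boots the guest. The sketch below shows how a domain built from such XML can be defined and started with the libvirt Go bindings (libvirt.org/go/libvirt); the file name and connection URI are assumptions, and this is not the docker-machine-driver-kvm2 code itself.

// definedomain.go: define a persistent KVM domain from XML and start it.
package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-106302-m03.xml") // domain XML like the one logged above
	if err != nil {
		panic(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent domain, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started; waiting for DHCP to hand out an IP")
}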
	I1205 19:21:15.693271  549077 main.go:141] libmachine: (ha-106302-m03) Waiting to get IP...
	I1205 19:21:15.694143  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:15.694577  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:15.694598  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:15.694548  549869 retry.go:31] will retry after 242.776885ms: waiting for machine to come up
	I1205 19:21:15.939062  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:15.939524  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:15.939551  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:15.939479  549869 retry.go:31] will retry after 378.968491ms: waiting for machine to come up
	I1205 19:21:16.320454  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:16.320979  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:16.321027  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:16.320939  549869 retry.go:31] will retry after 344.418245ms: waiting for machine to come up
	I1205 19:21:16.667478  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:16.667854  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:16.667886  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:16.667793  549869 retry.go:31] will retry after 423.913988ms: waiting for machine to come up
	I1205 19:21:17.093467  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:17.093883  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:17.093914  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:17.093826  549869 retry.go:31] will retry after 515.714654ms: waiting for machine to come up
	I1205 19:21:17.611140  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:17.611460  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:17.611485  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:17.611417  549869 retry.go:31] will retry after 696.033751ms: waiting for machine to come up
	I1205 19:21:18.308904  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:18.309411  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:18.309441  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:18.309369  549869 retry.go:31] will retry after 785.032938ms: waiting for machine to come up
	I1205 19:21:19.095780  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:19.096341  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:19.096368  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:19.096298  549869 retry.go:31] will retry after 896.435978ms: waiting for machine to come up
	I1205 19:21:19.994107  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:19.994555  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:19.994578  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:19.994515  549869 retry.go:31] will retry after 1.855664433s: waiting for machine to come up
	I1205 19:21:21.852199  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:21.852746  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:21.852782  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:21.852681  549869 retry.go:31] will retry after 1.846119751s: waiting for machine to come up
	I1205 19:21:23.701581  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:23.702157  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:23.702188  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:23.702108  549869 retry.go:31] will retry after 2.613135019s: waiting for machine to come up
	I1205 19:21:26.317749  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:26.318296  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:26.318317  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:26.318258  549869 retry.go:31] will retry after 3.299144229s: waiting for machine to come up
	I1205 19:21:29.618947  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:29.619445  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:29.619480  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:29.619393  549869 retry.go:31] will retry after 3.447245355s: waiting for machine to come up
	I1205 19:21:33.071166  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:33.071564  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:33.071595  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:33.071509  549869 retry.go:31] will retry after 3.459206484s: waiting for machine to come up
	I1205 19:21:36.533492  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.533999  549077 main.go:141] libmachine: (ha-106302-m03) Found IP for machine: 192.168.39.151
	I1205 19:21:36.534029  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has current primary IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
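[Editor's note] The "will retry after ..." lines above are a wait loop with growing, jittered delays until the guest's MAC shows up with a DHCP lease. A generic sketch of that pattern follows; the lease lookup is left as a stub assumption rather than a real libvirt query.

// waitip.go: retry with growing delays until a lookup function returns an IP.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP stands in for querying the network's DHCP leases for the
// machine's MAC address (52:54:00:e6:65:e2 in the log above).
func lookupIP() (string, error) {
	return "", errNoIP
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, as the retry.go backoff in the log does.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	if ip, err := waitForIP(30 * time.Second); err == nil {
		fmt.Println("found IP:", ip)
	} else {
		fmt.Println(err)
	}
}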
	I1205 19:21:36.534063  549077 main.go:141] libmachine: (ha-106302-m03) Reserving static IP address...
	I1205 19:21:36.534590  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find host DHCP lease matching {name: "ha-106302-m03", mac: "52:54:00:e6:65:e2", ip: "192.168.39.151"} in network mk-ha-106302
	I1205 19:21:36.616736  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Getting to WaitForSSH function...
	I1205 19:21:36.616827  549077 main.go:141] libmachine: (ha-106302-m03) Reserved static IP address: 192.168.39.151
	I1205 19:21:36.616852  549077 main.go:141] libmachine: (ha-106302-m03) Waiting for SSH to be available...
	I1205 19:21:36.619362  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.620041  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.620071  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.620207  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using SSH client type: external
	I1205 19:21:36.620243  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa (-rw-------)
	I1205 19:21:36.620289  549077 main.go:141] libmachine: (ha-106302-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:21:36.620307  549077 main.go:141] libmachine: (ha-106302-m03) DBG | About to run SSH command:
	I1205 19:21:36.620323  549077 main.go:141] libmachine: (ha-106302-m03) DBG | exit 0
	I1205 19:21:36.748331  549077 main.go:141] libmachine: (ha-106302-m03) DBG | SSH cmd err, output: <nil>: 
	I1205 19:21:36.748638  549077 main.go:141] libmachine: (ha-106302-m03) KVM machine creation complete!
	I1205 19:21:36.748951  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:36.749696  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:36.749899  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:36.750158  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:21:36.750177  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetState
	I1205 19:21:36.751459  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:21:36.751496  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:21:36.751505  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:21:36.751516  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.753721  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.754147  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.754180  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.754321  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.754488  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.754635  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.754782  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.754931  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.755238  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.755253  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:21:36.859924  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:21:36.859961  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:21:36.859974  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.864316  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.864691  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.864716  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.864886  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.865081  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.865227  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.865363  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.865505  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.865742  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.865757  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:21:36.969493  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:21:36.969588  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:21:36.969602  549077 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:21:36.969613  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:36.969955  549077 buildroot.go:166] provisioning hostname "ha-106302-m03"
	I1205 19:21:36.969984  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:36.970178  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.972856  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.973248  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.973275  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.973447  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.973641  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.973807  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.973971  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.974182  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.974409  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.974424  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302-m03 && echo "ha-106302-m03" | sudo tee /etc/hostname
	I1205 19:21:37.091631  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302-m03
	
	I1205 19:21:37.091670  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.095049  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.095508  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.095538  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.095711  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.095892  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.096106  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.096340  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.096575  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.096743  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.096759  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:21:37.210648  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:21:37.210686  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:21:37.210703  549077 buildroot.go:174] setting up certificates
	I1205 19:21:37.210719  549077 provision.go:84] configureAuth start
	I1205 19:21:37.210728  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:37.211084  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:37.214307  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.214777  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.214811  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.214993  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.217609  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.218026  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.218059  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.218357  549077 provision.go:143] copyHostCerts
	I1205 19:21:37.218397  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:21:37.218443  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:21:37.218457  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:21:37.218538  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:21:37.218640  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:21:37.218667  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:21:37.218672  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:21:37.218707  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:21:37.218773  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:21:37.218800  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:21:37.218810  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:21:37.218844  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:21:37.218931  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302-m03 san=[127.0.0.1 192.168.39.151 ha-106302-m03 localhost minikube]
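[Editor's note] The line above issues a server certificate signed by the minikube CA with the listed SANs (127.0.0.1, 192.168.39.151, ha-106302-m03, localhost, minikube). The sketch below shows the general shape of that step with crypto/x509; key type, validity, file names and the assumption that the CA key is PKCS#1 RSA are all illustrative, and error handling is trimmed for brevity.

// servercert.go: issue a server certificate with SANs, signed by an existing CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and key (ca.pem / ca-key.pem in the log).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)       // assumed non-nil for this sketch
	caKeyBlock, _ := pem.Decode(caKeyPEM) // assumed PKCS#1 RSA
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)
	if err != nil {
		panic(err)
	}

	// Fresh key pair for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-106302-m03"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as reported by the provisioner above.
		DNSNames:    []string{"ha-106302-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.151")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}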
	I1205 19:21:37.343754  549077 provision.go:177] copyRemoteCerts
	I1205 19:21:37.343819  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:21:37.343847  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.346846  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.347219  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.347248  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.347438  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.347639  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.347948  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.348134  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:37.432798  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:21:37.432880  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:21:37.459881  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:21:37.459950  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:21:37.486599  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:21:37.486685  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:21:37.511864  549077 provision.go:87] duration metric: took 301.129005ms to configureAuth
	I1205 19:21:37.511899  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:21:37.512151  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:37.512247  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.515413  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.515827  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.515873  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.516082  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.516362  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.516553  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.516696  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.516848  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.517021  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.517041  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:21:37.766182  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:21:37.766214  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:21:37.766223  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetURL
	I1205 19:21:37.767491  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using libvirt version 6000000
	I1205 19:21:37.770234  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.770645  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.770683  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.770820  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:21:37.770836  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:21:37.770844  549077 client.go:171] duration metric: took 23.716534789s to LocalClient.Create
	I1205 19:21:37.770869  549077 start.go:167] duration metric: took 23.716613038s to libmachine.API.Create "ha-106302"
	I1205 19:21:37.770879  549077 start.go:293] postStartSetup for "ha-106302-m03" (driver="kvm2")
	I1205 19:21:37.770890  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:21:37.770909  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:37.771260  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:21:37.771293  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.773751  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.774322  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.774351  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.774623  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.774898  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.775132  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.775318  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:37.864963  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:21:37.869224  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:21:37.869250  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:21:37.869346  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:21:37.869450  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:21:37.869464  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:21:37.869572  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:21:37.878920  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:21:37.904695  549077 start.go:296] duration metric: took 133.797994ms for postStartSetup
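
Note on the filesync step above: anything found under the profile's .minikube/files tree is mirrored to the same absolute path on the new node, which is how 5381862.pem ends up in /etc/ssl/certs. A minimal Go sketch of that mapping (illustrative only, not minikube's filesync.go; the local print stands in for the scp over SSH):

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	// Local assets root as it appears in the log; adjust for your own profile.
	root := "/home/jenkins/minikube-integration/20052-530897/.minikube/files"
	_ = filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		// files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
		dst := "/" + strings.TrimPrefix(p, root+"/")
		fmt.Printf("would scp %s --> %s\n", p, dst)
		return nil
	})
}
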
	I1205 19:21:37.904759  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:37.905447  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:37.908301  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.908672  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.908702  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.908956  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:37.909156  549077 start.go:128] duration metric: took 23.874183503s to createHost
	I1205 19:21:37.909187  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.911450  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.911786  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.911820  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.911891  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.912073  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.912217  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.912383  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.912551  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.912721  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.912731  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:21:38.013720  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426497.965708253
	
	I1205 19:21:38.013754  549077 fix.go:216] guest clock: 1733426497.965708253
	I1205 19:21:38.013766  549077 fix.go:229] Guest: 2024-12-05 19:21:37.965708253 +0000 UTC Remote: 2024-12-05 19:21:37.909171964 +0000 UTC m=+152.282908362 (delta=56.536289ms)
	I1205 19:21:38.013790  549077 fix.go:200] guest clock delta is within tolerance: 56.536289ms
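
The two fix.go lines above compare the VM clock (read with "date +%s.%N") against the host clock at the moment of the check. A small Go reproduction of that arithmetic using the exact timestamps from the log (the 2-second tolerance is an assumption for the sketch, not necessarily minikube's bound):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1733426497, 965708253)                       // value returned by date +%s.%N in the VM
	host := time.Date(2024, 12, 5, 19, 21, 37, 909171964, time.UTC) // host time when the command returned
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed bound for illustration
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
	// Prints: delta=56.536289ms, within tolerance: true
}
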
	I1205 19:21:38.013799  549077 start.go:83] releasing machines lock for "ha-106302-m03", held for 23.978946471s
	I1205 19:21:38.013827  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.014134  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:38.016789  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.017218  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.017243  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.019529  549077 out.go:177] * Found network options:
	I1205 19:21:38.020846  549077 out.go:177]   - NO_PROXY=192.168.39.185,192.168.39.22
	W1205 19:21:38.022010  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:21:38.022031  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:21:38.022044  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022565  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022780  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022889  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:21:38.022930  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	W1205 19:21:38.022997  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:21:38.023035  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:21:38.023141  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:21:38.023159  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:38.025672  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.025960  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026079  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.026109  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026225  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:38.026344  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.026368  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026432  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:38.026548  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:38.026555  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:38.026676  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:38.026727  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:38.026820  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:38.026963  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:38.262374  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:21:38.269119  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:21:38.269192  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:21:38.288736  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:21:38.288773  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:21:38.288918  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:21:38.308145  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:21:38.324419  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:21:38.324486  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:21:38.340495  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:21:38.356196  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:21:38.499051  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:21:38.664170  549077 docker.go:233] disabling docker service ...
	I1205 19:21:38.664261  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:21:38.679720  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:21:38.693887  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:21:38.835246  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:21:38.967777  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:21:38.984739  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:21:39.005139  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:21:39.005219  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.018668  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:21:39.018748  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.030582  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.042783  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.055956  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:21:39.068121  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.079421  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.099262  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.112188  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:21:39.123835  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:21:39.123897  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:21:39.142980  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:21:39.158784  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:21:39.282396  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
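
The block above is the container-runtime preparation sequence: point crictl at the CRI-O socket, pin the pause image, switch CRI-O to the cgroupfs driver, load br_netfilter, enable IP forwarding, then restart crio. A condensed sketch of those remote commands (illustrative; the real logic lives in minikube's cruntime code and runs each command over SSH with error handling):

package main

import "fmt"

func main() {
	steps := []string{
		// point crictl at the CRI-O socket
		`sudo mkdir -p /etc && printf '%s' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
		// pin the pause image and use the cgroupfs cgroup driver
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		// make bridged traffic visible to iptables and allow forwarding
		`sudo modprobe br_netfilter`,
		`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
	for _, s := range steps {
		fmt.Println(s) // the real code executes each step on the node and checks the result
	}
}
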
	I1205 19:21:39.381886  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:21:39.381979  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:21:39.387103  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:21:39.387165  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:21:39.391338  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:21:39.433516  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:21:39.433618  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:21:39.463442  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:21:39.493740  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:21:39.495019  549077 out.go:177]   - env NO_PROXY=192.168.39.185
	I1205 19:21:39.496240  549077 out.go:177]   - env NO_PROXY=192.168.39.185,192.168.39.22
	I1205 19:21:39.497508  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:39.500359  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:39.500726  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:39.500755  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:39.500911  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:21:39.505557  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
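
The one-liner above is the pattern minikube uses for /etc/hosts updates: filter out any stale host.minikube.internal entry, append the current gateway mapping, and copy the rewritten file back into place. Roughly the same logic in Go (a local sketch; the logged version does this remotely via bash and sudo cp):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	// The real flow writes this to a temp file and sudo-copies it over /etc/hosts.
	fmt.Println(strings.Join(kept, "\n"))
}
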
	I1205 19:21:39.519317  549077 mustload.go:65] Loading cluster: ha-106302
	I1205 19:21:39.519614  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:39.519880  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:39.519923  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:39.535653  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I1205 19:21:39.536186  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:39.536801  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:39.536826  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:39.537227  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:39.537444  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:21:39.538986  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:21:39.539332  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:39.539371  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:39.555429  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I1205 19:21:39.555999  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:39.556560  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:39.556589  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:39.556932  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:39.557156  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:21:39.557335  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.151
	I1205 19:21:39.557356  549077 certs.go:194] generating shared ca certs ...
	I1205 19:21:39.557390  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.557557  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:21:39.557617  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:21:39.557630  549077 certs.go:256] generating profile certs ...
	I1205 19:21:39.557734  549077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:21:39.557771  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85
	I1205 19:21:39.557795  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.151 192.168.39.254]
	I1205 19:21:39.646088  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 ...
	I1205 19:21:39.646122  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85: {Name:mkca6986931a87aa8d4bcffb8b1ac6412a83db65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.646289  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85 ...
	I1205 19:21:39.646301  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85: {Name:mke7f657c575646b15413aa5e5525c127a73d588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.646374  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:21:39.646516  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
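
The apiserver serving certificate generated above carries IP SANs for the service IP, localhost, every control-plane node, and the kube-vip VIP (192.168.39.254), so clients can reach any member through the shared address. A self-signed Go sketch with the same SAN list (illustrative only; minikube actually signs this cert with its minikubeCA key in crypto.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs from the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.185"), net.ParseIP("192.168.39.22"),
			net.ParseIP("192.168.39.151"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("apiserver.crt", pemBytes, 0o644); err != nil {
		panic(err)
	}
}
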
	I1205 19:21:39.646682  549077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:21:39.646703  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:21:39.646737  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:21:39.646758  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:21:39.646775  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:21:39.646792  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:21:39.646808  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:21:39.646827  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:21:39.660323  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:21:39.660454  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:21:39.660507  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:21:39.660523  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:21:39.660561  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:21:39.660595  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:21:39.660628  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:21:39.660684  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:21:39.660725  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:21:39.660748  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:21:39.660768  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:39.660816  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:21:39.664340  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:39.664849  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:21:39.664879  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:39.665165  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:21:39.665411  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:21:39.665607  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:21:39.665765  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:21:39.748651  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 19:21:39.754014  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 19:21:39.766062  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 19:21:39.771674  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 19:21:39.784618  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 19:21:39.789041  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 19:21:39.802785  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 19:21:39.808595  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1205 19:21:39.822597  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 19:21:39.827169  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 19:21:39.839924  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 19:21:39.844630  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1205 19:21:39.865166  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:21:39.890669  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:21:39.914805  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:21:39.938866  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:21:39.964041  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1205 19:21:39.989973  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:21:40.017414  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:21:40.042496  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:21:40.067448  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:21:40.092444  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:21:40.118324  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:21:40.144679  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 19:21:40.162124  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 19:21:40.178895  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 19:21:40.196614  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1205 19:21:40.216743  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 19:21:40.236796  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1205 19:21:40.255368  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 19:21:40.272767  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:21:40.279013  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:21:40.291865  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.297901  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.297969  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.305022  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:21:40.317671  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:21:40.330059  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.335215  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.335291  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.341648  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:21:40.353809  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:21:40.366241  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.371103  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.371178  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.377410  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
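
The three install blocks above follow the standard OpenSSL trust-store convention: copy each CA into /usr/share/ca-certificates, compute its subject hash, and point /etc/ssl/certs/<hash>.0 at it so OpenSSL can find the cert by hash. A small Go sketch of that last step (illustrative; it shells out to the same openssl invocation the log runs, and would need root for the symlink):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCA(cert string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(cert, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
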
	I1205 19:21:40.389484  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:21:40.394089  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:21:40.394159  549077 kubeadm.go:934] updating node {m03 192.168.39.151 8443 v1.31.2 crio true true} ...
	I1205 19:21:40.394281  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:21:40.394312  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:21:40.394383  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:21:40.412017  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:21:40.412099  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
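
The manifest above is later written to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp further down), so kubelet runs kube-vip as a static pod that holds the 192.168.39.254 VIP and load-balances port 8443 across the control-plane nodes. Only a couple of values are cluster-specific; a tiny templating sketch (illustrative, not minikube's kube-vip.go):

package main

import (
	"os"
	"text/template"
)

// Only the fields that vary per cluster in the manifest above.
const snippet = `    - name: vip_interface
      value: {{.Interface}}
    - name: address
      value: {{.VIP}}
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(snippet))
	_ = t.Execute(os.Stdout, struct{ VIP, Interface string }{VIP: "192.168.39.254", Interface: "eth0"})
}
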
	I1205 19:21:40.412152  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:21:40.422903  549077 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 19:21:40.422982  549077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 19:21:40.433537  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1205 19:21:40.433551  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1205 19:21:40.433572  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:21:40.433606  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:21:40.433603  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 19:21:40.433634  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:21:40.433638  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:21:40.433701  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:21:40.452070  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 19:21:40.452102  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:21:40.452118  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 19:21:40.452167  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 19:21:40.452196  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:21:40.452198  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 19:21:40.481457  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 19:21:40.481500  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
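
The transfer logic above is a simple check-then-copy: stat each binary under /var/lib/minikube/binaries/v1.31.2 on the node and scp the cached copy only when the stat fails. A local sketch of that branch (illustrative; the logged version runs stat over SSH and falls back to downloading from dl.k8s.io when there is no cache):

package main

import (
	"fmt"
	"os/exec"
)

func ensureBinary(name string) {
	remote := "/var/lib/minikube/binaries/v1.31.2/" + name
	// In the log this stat runs on the node over SSH; here it just illustrates the branch.
	if err := exec.Command("stat", remote).Run(); err != nil {
		fmt.Printf("%s missing, would copy cached binary --> %s\n", name, remote)
		return
	}
	fmt.Printf("%s already present, skipping transfer\n", name)
}

func main() {
	for _, b := range []string{"kubeadm", "kubectl", "kubelet"} {
		ensureBinary(b)
	}
}
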
	I1205 19:21:41.411979  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 19:21:41.422976  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 19:21:41.442199  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:21:41.460832  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:21:41.479070  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:21:41.483375  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:21:41.497066  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:21:41.622952  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:21:41.643215  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:21:41.643585  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:41.643643  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:41.660142  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39403
	I1205 19:21:41.660811  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:41.661472  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:41.661507  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:41.661908  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:41.662156  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:21:41.663022  549077 start.go:317] joinCluster: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:21:41.663207  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 19:21:41.663239  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:21:41.666973  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:41.667413  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:21:41.667445  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:41.667629  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:21:41.667805  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:21:41.667958  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:21:41.668092  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:21:41.845827  549077 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:21:41.845894  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bitrl5.l9o7pcy69k2x0m8f --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m03 --control-plane --apiserver-advertise-address=192.168.39.151 --apiserver-bind-port=8443"
	I1205 19:22:05.091694  549077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bitrl5.l9o7pcy69k2x0m8f --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m03 --control-plane --apiserver-advertise-address=192.168.39.151 --apiserver-bind-port=8443": (23.245742289s)
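
The join above is assembled from two parts: the primary node prints a base join command (the kubeadm token create --print-join-command --ttl=0 run earlier), and the control-plane-specific flags for m03 are appended to it. A sketch of that assembly with the token and hash redacted (illustrative only):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Output of kubeadm token create --print-join-command on the primary (values redacted).
	base := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=ha-106302-m03",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.151",
		"--apiserver-bind-port=8443",
	}
	fmt.Println(base + " " + strings.Join(extra, " "))
}
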
	I1205 19:22:05.091745  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 19:22:05.651069  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302-m03 minikube.k8s.io/updated_at=2024_12_05T19_22_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=false
	I1205 19:22:05.805746  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-106302-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 19:22:05.942387  549077 start.go:319] duration metric: took 24.279360239s to joinCluster
	I1205 19:22:05.942527  549077 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:22:05.942909  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:05.943936  549077 out.go:177] * Verifying Kubernetes components...
	I1205 19:22:05.945223  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:22:06.284991  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:22:06.343812  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:22:06.344263  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 19:22:06.344398  549077 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.185:8443
	I1205 19:22:06.344797  549077 node_ready.go:35] waiting up to 6m0s for node "ha-106302-m03" to be "Ready" ...
	I1205 19:22:06.344937  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:06.344951  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:06.344962  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:06.344969  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:06.358416  549077 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
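
The repeated GETs that follow are a readiness poll: fetch the Node object roughly every half second and stop once its Ready condition reports True (the node_ready lines below show it staying False while kubelet and the CNI come up). A bare-bones Go version of the same loop (illustrative; TLS and client certificates are omitted, so a real run needs the cluster's credentials):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func ready(c *http.Client, url string) (bool, error) {
	resp, err := c.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == "Ready" {
			return cond.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	url := "https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03"
	c := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := ready(c, url); err == nil && ok {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node Ready")
}
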
	I1205 19:22:06.845609  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:06.845637  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:06.845650  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:06.845657  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:06.850140  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:07.345201  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:07.345229  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:07.345238  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:07.345242  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:07.349137  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:07.845591  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:07.845615  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:07.845624  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:07.845628  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:07.849417  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:08.345109  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:08.345139  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:08.345151  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:08.345155  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:08.349617  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:08.350266  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:08.845598  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:08.845626  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:08.845638  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:08.845643  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:08.849144  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:09.345621  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:09.345646  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:09.345656  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:09.345660  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:09.349983  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:09.845757  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:09.845782  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:09.845790  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:09.845794  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:09.849681  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:10.345604  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:10.345635  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:10.345648  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:10.345654  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:10.349727  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:10.350478  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:10.845342  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:10.845367  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:10.845376  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:10.845381  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:10.848990  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:11.346073  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:11.346097  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:11.346105  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:11.346109  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:11.350613  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:11.845378  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:11.845411  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:11.845426  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:11.845434  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:11.849253  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:12.345303  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:12.345337  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:12.345349  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:12.345358  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:12.352355  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:12.353182  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:12.845552  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:12.845581  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:12.845591  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:12.845595  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:12.849732  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:13.345587  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:13.345613  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:13.345623  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:13.345629  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:13.349259  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:13.845165  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:13.845197  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:13.845209  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:13.845214  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:13.849815  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:14.345423  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:14.345458  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:14.345471  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:14.345480  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:14.353042  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:22:14.353960  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:14.845215  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:14.845239  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:14.845248  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:14.845252  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:14.848681  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:15.345651  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:15.345681  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:15.345699  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:15.345706  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:15.349604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:15.845599  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:15.845627  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:15.845637  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:15.845641  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:15.849736  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:16.345974  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:16.346003  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:16.346012  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:16.346017  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:16.350399  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:16.845026  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:16.845057  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:16.845067  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:16.845071  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:16.848713  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:16.849459  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:17.345612  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:17.345660  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:17.345688  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:17.345700  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:17.349461  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:17.845355  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:17.845379  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:17.845388  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:17.845392  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:17.851232  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:22:18.346074  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:18.346098  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:18.346107  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:18.346112  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:18.350327  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:18.845241  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:18.845266  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:18.845273  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:18.845277  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:18.848579  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:18.849652  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:19.345480  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:19.345506  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:19.345515  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:19.345519  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:19.349757  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:19.845572  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:19.845597  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:19.845606  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:19.845621  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:19.849116  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:20.345089  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:20.345113  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:20.345121  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:20.345126  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:20.348890  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:20.846039  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:20.846062  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:20.846070  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:20.846075  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:20.850247  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:20.850972  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:21.345329  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:21.345370  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:21.345381  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:21.345387  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:21.349225  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:21.845571  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:21.845604  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:21.845616  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:21.845622  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:21.849183  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:22.345428  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:22.345453  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:22.345461  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:22.345466  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:22.349371  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:22.845510  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:22.845534  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:22.845543  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:22.845549  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:22.849220  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:23.345442  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:23.345470  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:23.345479  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:23.345484  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:23.349347  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:23.350300  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:23.845549  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:23.845574  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:23.845582  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:23.845587  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:23.849893  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:24.345261  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:24.345292  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:24.345302  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:24.345306  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:24.349136  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:24.845545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:24.845574  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:24.845583  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:24.845586  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:24.849619  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:25.345655  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.345687  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.345745  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.345781  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.349427  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.350218  549077 node_ready.go:49] node "ha-106302-m03" has status "Ready":"True"
	I1205 19:22:25.350237  549077 node_ready.go:38] duration metric: took 19.005417749s for node "ha-106302-m03" to be "Ready" ...
	I1205 19:22:25.350247  549077 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:22:25.350324  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:25.350335  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.350342  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.350347  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.358969  549077 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 19:22:25.365676  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.365768  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-45m77
	I1205 19:22:25.365777  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.365785  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.365790  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.369626  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.370252  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.370268  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.370276  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.370280  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.373604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.374401  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.374417  549077 pod_ready.go:82] duration metric: took 8.712508ms for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.374426  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.374491  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sjsv2
	I1205 19:22:25.374498  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.374505  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.374510  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.377314  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.378099  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.378115  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.378125  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.378130  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.380745  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.381330  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.381354  549077 pod_ready.go:82] duration metric: took 6.920357ms for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.381366  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.381430  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302
	I1205 19:22:25.381437  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.381445  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.381452  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.384565  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.385119  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.385140  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.385150  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.385156  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.387832  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.388313  549077 pod_ready.go:93] pod "etcd-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.388334  549077 pod_ready.go:82] duration metric: took 6.95931ms for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.388344  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.388405  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m02
	I1205 19:22:25.388413  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.388420  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.388426  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.390958  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.391627  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:25.391646  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.391657  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.391664  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.394336  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.394843  549077 pod_ready.go:93] pod "etcd-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.394860  549077 pod_ready.go:82] duration metric: took 6.510348ms for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.394870  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.546322  549077 request.go:632] Waited for 151.362843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m03
	I1205 19:22:25.546441  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m03
	I1205 19:22:25.546457  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.546468  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.546478  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.551505  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:22:25.746379  549077 request.go:632] Waited for 194.045637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.746447  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.746452  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.746460  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.746465  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.749940  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.750364  549077 pod_ready.go:93] pod "etcd-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.750384  549077 pod_ready.go:82] duration metric: took 355.50711ms for pod "etcd-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.750410  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.945946  549077 request.go:632] Waited for 195.44547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:22:25.946012  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:22:25.946017  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.946026  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.946031  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.949896  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.146187  549077 request.go:632] Waited for 195.303913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:26.146261  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:26.146266  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.146281  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.146284  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.150155  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.150850  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.150872  549077 pod_ready.go:82] duration metric: took 400.452175ms for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.150884  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.346018  549077 request.go:632] Waited for 195.032626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:22:26.346106  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:22:26.346114  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.346126  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.346134  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.350215  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:26.546617  549077 request.go:632] Waited for 195.375501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:26.546704  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:26.546710  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.546718  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.546722  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.550695  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.551267  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.551288  549077 pod_ready.go:82] duration metric: took 400.395912ms for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.551301  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.746009  549077 request.go:632] Waited for 194.599498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m03
	I1205 19:22:26.746081  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m03
	I1205 19:22:26.746088  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.746096  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.746102  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.750448  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:26.945801  549077 request.go:632] Waited for 194.318273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:26.945876  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:26.945882  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.945893  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.945901  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.949211  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.949781  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.949807  549077 pod_ready.go:82] duration metric: took 398.493465ms for pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.949821  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.145762  549077 request.go:632] Waited for 195.843082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:22:27.145841  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:22:27.145847  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.145856  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.145863  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.150825  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:27.346689  549077 request.go:632] Waited for 195.243035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:27.346772  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:27.346785  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.346804  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.346815  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.350485  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:27.351090  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:27.351111  549077 pod_ready.go:82] duration metric: took 401.282274ms for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.351122  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.546113  549077 request.go:632] Waited for 194.908111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:22:27.546216  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:22:27.546228  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.546241  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.546255  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.550360  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:27.746526  549077 request.go:632] Waited for 195.360331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:27.746617  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:27.746626  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.746635  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.746640  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.753462  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:27.754708  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:27.754735  549077 pod_ready.go:82] duration metric: took 403.601936ms for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.754750  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.945674  549077 request.go:632] Waited for 190.826423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m03
	I1205 19:22:27.945746  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m03
	I1205 19:22:27.945752  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.945760  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.945764  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.949668  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.146444  549077 request.go:632] Waited for 195.387763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.146510  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.146515  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.146523  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.146535  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.150750  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.151357  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.151381  549077 pod_ready.go:82] duration metric: took 396.622007ms for pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.151393  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.345948  549077 request.go:632] Waited for 194.471828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:22:28.346043  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:22:28.346051  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.346059  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.346064  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.350114  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.546260  549077 request.go:632] Waited for 195.407825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:28.546369  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:28.546382  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.546394  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.546413  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.551000  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.551628  549077 pod_ready.go:93] pod "kube-proxy-n57lf" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.551654  549077 pod_ready.go:82] duration metric: took 400.254319ms for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.551666  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pghdx" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.746587  549077 request.go:632] Waited for 194.82213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pghdx
	I1205 19:22:28.746705  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pghdx
	I1205 19:22:28.746718  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.746727  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.746737  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.750453  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.946581  549077 request.go:632] Waited for 195.373436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.946682  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.946693  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.946704  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.946714  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.949892  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.950341  549077 pod_ready.go:93] pod "kube-proxy-pghdx" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.950360  549077 pod_ready.go:82] duration metric: took 398.68655ms for pod "kube-proxy-pghdx" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.950370  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.145964  549077 request.go:632] Waited for 195.515335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:22:29.146035  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:22:29.146042  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.146052  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.146058  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.149161  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:29.346356  549077 request.go:632] Waited for 196.408917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.346467  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.346475  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.346505  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.346577  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.350334  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:29.351251  549077 pod_ready.go:93] pod "kube-proxy-zw6nj" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:29.351290  549077 pod_ready.go:82] duration metric: took 400.913186ms for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.351307  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.545602  549077 request.go:632] Waited for 194.210598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:22:29.545674  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:22:29.545682  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.545694  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.545705  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.549980  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:29.746034  549077 request.go:632] Waited for 195.473431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.746121  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.746128  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.746140  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.746148  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.750509  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:29.751460  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:29.751481  549077 pod_ready.go:82] duration metric: took 400.162109ms for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.751493  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.946019  549077 request.go:632] Waited for 194.44438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:22:29.946119  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:22:29.946131  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.946140  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.946148  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.949224  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.146466  549077 request.go:632] Waited for 196.38785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:30.146542  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:30.146550  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.146562  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.146575  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.150163  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.150654  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:30.150677  549077 pod_ready.go:82] duration metric: took 399.174639ms for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.150688  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.346682  549077 request.go:632] Waited for 195.915039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m03
	I1205 19:22:30.346759  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m03
	I1205 19:22:30.346764  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.346773  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.346788  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.350596  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.545763  549077 request.go:632] Waited for 194.297931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:30.545847  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:30.545854  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.545865  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.545873  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.549623  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.550473  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:30.550494  549077 pod_ready.go:82] duration metric: took 399.800176ms for pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.550505  549077 pod_ready.go:39] duration metric: took 5.200248716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:22:30.550539  549077 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:22:30.550598  549077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:22:30.565872  549077 api_server.go:72] duration metric: took 24.623303746s to wait for apiserver process to appear ...
	I1205 19:22:30.565908  549077 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:22:30.565931  549077 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I1205 19:22:30.570332  549077 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I1205 19:22:30.570415  549077 round_trippers.go:463] GET https://192.168.39.185:8443/version
	I1205 19:22:30.570426  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.570440  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.570444  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.571545  549077 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:22:30.571615  549077 api_server.go:141] control plane version: v1.31.2
	I1205 19:22:30.571635  549077 api_server.go:131] duration metric: took 5.719204ms to wait for apiserver health ...
	I1205 19:22:30.571664  549077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:22:30.746133  549077 request.go:632] Waited for 174.37713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:30.746217  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:30.746231  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.746244  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.746251  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.753131  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:30.760159  549077 system_pods.go:59] 24 kube-system pods found
	I1205 19:22:30.760194  549077 system_pods.go:61] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:22:30.760202  549077 system_pods.go:61] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:22:30.760208  549077 system_pods.go:61] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:22:30.760214  549077 system_pods.go:61] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:22:30.760219  549077 system_pods.go:61] "etcd-ha-106302-m03" [08e9ef91-8e16-4ff1-a2df-8275e72a5697] Running
	I1205 19:22:30.760224  549077 system_pods.go:61] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:22:30.760228  549077 system_pods.go:61] "kindnet-wdsv9" [83d82f5d-42c3-47be-af20-41b82c16b114] Running
	I1205 19:22:30.760233  549077 system_pods.go:61] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:22:30.760238  549077 system_pods.go:61] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:22:30.760243  549077 system_pods.go:61] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:22:30.760249  549077 system_pods.go:61] "kube-apiserver-ha-106302-m03" [398242aa-f015-47ca-9132-23412c52878d] Running
	I1205 19:22:30.760254  549077 system_pods.go:61] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:22:30.760259  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:22:30.760288  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m03" [8af17291-c1b7-417f-a2dd-5a00ca58b07e] Running
	I1205 19:22:30.760294  549077 system_pods.go:61] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:22:30.760300  549077 system_pods.go:61] "kube-proxy-pghdx" [915060a3-353c-4a2c-a9d6-494206776446] Running
	I1205 19:22:30.760306  549077 system_pods.go:61] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:22:30.760312  549077 system_pods.go:61] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:22:30.760321  549077 system_pods.go:61] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:22:30.760327  549077 system_pods.go:61] "kube-scheduler-ha-106302-m03" [1b601e0c-59c7-4248-b29c-44d19934f590] Running
	I1205 19:22:30.760333  549077 system_pods.go:61] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:22:30.760339  549077 system_pods.go:61] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:22:30.760347  549077 system_pods.go:61] "kube-vip-ha-106302-m03" [6e511769-148e-43eb-a4bb-6dd72dfcd11d] Running
	I1205 19:22:30.760352  549077 system_pods.go:61] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:22:30.760361  549077 system_pods.go:74] duration metric: took 188.685514ms to wait for pod list to return data ...
	I1205 19:22:30.760375  549077 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:22:30.946070  549077 request.go:632] Waited for 185.595824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:22:30.946137  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:22:30.946142  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.946151  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.946159  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.950732  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:30.950901  549077 default_sa.go:45] found service account: "default"
	I1205 19:22:30.950919  549077 default_sa.go:55] duration metric: took 190.53748ms for default service account to be created ...
	I1205 19:22:30.950929  549077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:22:31.146374  549077 request.go:632] Waited for 195.332956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:31.146437  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:31.146443  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:31.146451  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:31.146456  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:31.153763  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:22:31.160825  549077 system_pods.go:86] 24 kube-system pods found
	I1205 19:22:31.160858  549077 system_pods.go:89] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:22:31.160865  549077 system_pods.go:89] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:22:31.160869  549077 system_pods.go:89] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:22:31.160874  549077 system_pods.go:89] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:22:31.160878  549077 system_pods.go:89] "etcd-ha-106302-m03" [08e9ef91-8e16-4ff1-a2df-8275e72a5697] Running
	I1205 19:22:31.160882  549077 system_pods.go:89] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:22:31.160888  549077 system_pods.go:89] "kindnet-wdsv9" [83d82f5d-42c3-47be-af20-41b82c16b114] Running
	I1205 19:22:31.160893  549077 system_pods.go:89] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:22:31.160900  549077 system_pods.go:89] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:22:31.160908  549077 system_pods.go:89] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:22:31.160914  549077 system_pods.go:89] "kube-apiserver-ha-106302-m03" [398242aa-f015-47ca-9132-23412c52878d] Running
	I1205 19:22:31.160925  549077 system_pods.go:89] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:22:31.160931  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:22:31.160937  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m03" [8af17291-c1b7-417f-a2dd-5a00ca58b07e] Running
	I1205 19:22:31.160946  549077 system_pods.go:89] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:22:31.160950  549077 system_pods.go:89] "kube-proxy-pghdx" [915060a3-353c-4a2c-a9d6-494206776446] Running
	I1205 19:22:31.160956  549077 system_pods.go:89] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:22:31.160960  549077 system_pods.go:89] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:22:31.160970  549077 system_pods.go:89] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:22:31.160976  549077 system_pods.go:89] "kube-scheduler-ha-106302-m03" [1b601e0c-59c7-4248-b29c-44d19934f590] Running
	I1205 19:22:31.160979  549077 system_pods.go:89] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:22:31.160985  549077 system_pods.go:89] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:22:31.160989  549077 system_pods.go:89] "kube-vip-ha-106302-m03" [6e511769-148e-43eb-a4bb-6dd72dfcd11d] Running
	I1205 19:22:31.160992  549077 system_pods.go:89] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:22:31.161001  549077 system_pods.go:126] duration metric: took 210.065272ms to wait for k8s-apps to be running ...
	I1205 19:22:31.161014  549077 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:22:31.161075  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:22:31.179416  549077 system_svc.go:56] duration metric: took 18.393613ms WaitForService to wait for kubelet
	I1205 19:22:31.179447  549077 kubeadm.go:582] duration metric: took 25.236889217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:22:31.179468  549077 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:22:31.345848  549077 request.go:632] Waited for 166.292279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes
	I1205 19:22:31.345915  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes
	I1205 19:22:31.345920  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:31.345937  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:31.345942  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:31.350337  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:31.351373  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351397  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351414  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351420  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351426  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351430  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351436  549077 node_conditions.go:105] duration metric: took 171.962205ms to run NodePressure ...
	I1205 19:22:31.351452  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:22:31.351479  549077 start.go:255] writing updated cluster config ...
	I1205 19:22:31.351794  549077 ssh_runner.go:195] Run: rm -f paused
	I1205 19:22:31.407206  549077 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 19:22:31.410298  549077 out.go:177] * Done! kubectl is now configured to use "ha-106302" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 19:26:12 ha-106302 crio[666]: time="2024-12-05 19:26:12.926866909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426772926842334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2f92556-d911-470b-a4bf-b73981ad2b45 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:12 ha-106302 crio[666]: time="2024-12-05 19:26:12.927937806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4aaede36-3448-490b-8e39-29fede978891 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:12 ha-106302 crio[666]: time="2024-12-05 19:26:12.928033454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4aaede36-3448-490b-8e39-29fede978891 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:12 ha-106302 crio[666]: time="2024-12-05 19:26:12.928344980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4aaede36-3448-490b-8e39-29fede978891 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:12 ha-106302 crio[666]: time="2024-12-05 19:26:12.970972635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4734329f-329a-4518-b9f2-47ecd19c66c6 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:12 ha-106302 crio[666]: time="2024-12-05 19:26:12.971096630Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4734329f-329a-4518-b9f2-47ecd19c66c6 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:12 ha-106302 crio[666]: time="2024-12-05 19:26:12.972628052Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1778ea6-58bd-4c2b-8ff7-e16cdcf2ad72 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:12 ha-106302 crio[666]: time="2024-12-05 19:26:12.973339391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426772973314058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1778ea6-58bd-4c2b-8ff7-e16cdcf2ad72 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:12 ha-106302 crio[666]: time="2024-12-05 19:26:12.973935843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ff0c03a-a7ae-4969-a1d2-5f6d147c461e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:12 ha-106302 crio[666]: time="2024-12-05 19:26:12.974103506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ff0c03a-a7ae-4969-a1d2-5f6d147c461e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:12 ha-106302 crio[666]: time="2024-12-05 19:26:12.974393217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ff0c03a-a7ae-4969-a1d2-5f6d147c461e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.014742282Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=492364df-6a24-4819-8728-03ed56519de8 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.014835791Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=492364df-6a24-4819-8728-03ed56519de8 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.016130164Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b94be928-4337-42a6-a776-af2a7ac64f37 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.016848607Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426773016821278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b94be928-4337-42a6-a776-af2a7ac64f37 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.017413179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97d2b810-ed4e-49da-a8c5-2ec4635055e6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.017483203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97d2b810-ed4e-49da-a8c5-2ec4635055e6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.017770645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97d2b810-ed4e-49da-a8c5-2ec4635055e6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.056852537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ec3e5b7-112d-4cfc-adb8-e7efb359cee9 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.056990907Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ec3e5b7-112d-4cfc-adb8-e7efb359cee9 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.058037598Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a82378ed-f04a-4dd1-92e5-aef779da7885 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.058957086Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426773058927367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a82378ed-f04a-4dd1-92e5-aef779da7885 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.060257698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=540b3c6f-534d-4a85-8935-8756900ffafc name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.060314086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=540b3c6f-534d-4a85-8935-8756900ffafc name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:13 ha-106302 crio[666]: time="2024-12-05 19:26:13.061050769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=540b3c6f-534d-4a85-8935-8756900ffafc name=/runtime.v1.RuntimeService/ListContainers
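The Version, ImageFsInfo, and ListContainers requests logged above are ordinary CRI calls arriving at the CRI-O socket (the debug entries come from CRI-O's gRPC interceptors). Roughly the same queries can be issued by hand with crictl; a sketch, assuming crictl is installed on the node and CRI-O is listening on its default socket:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --output json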
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8175779cb5746       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   619925cbc39c6       busybox-7dff88458-p8z47
	d7af42dff52cf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   95ad32628ed37       coredns-7c65d6cfc9-sjsv2
	71878f2ac51ce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   79783fce24db9       coredns-7c65d6cfc9-45m77
	a647561fc8a81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ba65941872158       storage-provisioner
	8e0e4de270d59       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   5f62be7378940       kindnet-xr9mh
	013c8063671c4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   dc8d6361e4972       kube-proxy-zw6nj
	a639bf005af20       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   3cfec88984b8a       kube-vip-ha-106302
	73802addf28ef       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   594e9eb586b32       etcd-ha-106302
	8d7fcd5f7d56d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   c920b14cf50aa       kube-apiserver-ha-106302
	dec1697264029       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   411118291d3f3       kube-scheduler-ha-106302
	c251344563e46       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   890699ae2c7d2       kube-controller-manager-ha-106302
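The coredns excerpts below correspond to the two coredns containers in this table; a sketch of pulling the same logs directly from the cluster, assuming the pod names shown above are still current:

    kubectl --context ha-106302 -n kube-system logs coredns-7c65d6cfc9-45m77
    kubectl --context ha-106302 -n kube-system logs coredns-7c65d6cfc9-sjsv2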
	
	
	==> coredns [71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07] <==
	[INFO] 127.0.0.1:37176 - 32561 "HINFO IN 3495974066793148999.5277118907247610982. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022894865s
	[INFO] 10.244.1.2:51203 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.01735349s
	[INFO] 10.244.2.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000272502s
	[INFO] 10.244.2.2:53757 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001751263s
	[INFO] 10.244.2.2:54738 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000495007s
	[INFO] 10.244.0.4:45576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000412263s
	[INFO] 10.244.0.4:48159 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000083837s
	[INFO] 10.244.1.2:34578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000302061s
	[INFO] 10.244.1.2:54721 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000235254s
	[INFO] 10.244.1.2:43877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000206178s
	[INFO] 10.244.1.2:35725 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012413s
	[INFO] 10.244.2.2:53111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00036507s
	[INFO] 10.244.2.2:60205 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00019223s
	[INFO] 10.244.2.2:49031 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000279282s
	[INFO] 10.244.1.2:48336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174589s
	[INFO] 10.244.1.2:47520 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164259s
	[INFO] 10.244.1.2:58000 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119136s
	[INFO] 10.244.1.2:52602 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196285s
	[INFO] 10.244.2.2:53065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143333s
	[INFO] 10.244.0.4:50807 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119749s
	[INFO] 10.244.0.4:60692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073699s
	[INFO] 10.244.1.2:46283 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281341s
	[INFO] 10.244.1.2:51750 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153725s
	[INFO] 10.244.2.2:33715 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141245s
	[INFO] 10.244.0.4:40497 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233306s
	
	
	==> coredns [d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b] <==
	[INFO] 10.244.2.2:53827 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001485777s
	[INFO] 10.244.2.2:55594 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000308847s
	[INFO] 10.244.2.2:34459 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118477s
	[INFO] 10.244.2.2:39473 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062912s
	[INFO] 10.244.0.4:50797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084736s
	[INFO] 10.244.0.4:49715 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001903972s
	[INFO] 10.244.0.4:60150 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000344373s
	[INFO] 10.244.0.4:43238 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075717s
	[INFO] 10.244.0.4:55133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001508595s
	[INFO] 10.244.0.4:49161 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071435s
	[INFO] 10.244.0.4:34396 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048471s
	[INFO] 10.244.0.4:40602 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037032s
	[INFO] 10.244.2.2:46010 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013718s
	[INFO] 10.244.2.2:59322 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108224s
	[INFO] 10.244.2.2:38750 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154868s
	[INFO] 10.244.0.4:43291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123825s
	[INFO] 10.244.0.4:44515 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163484s
	[INFO] 10.244.1.2:60479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154514s
	[INFO] 10.244.1.2:42615 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210654s
	[INFO] 10.244.2.2:57422 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132377s
	[INFO] 10.244.2.2:51037 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00039203s
	[INFO] 10.244.2.2:35850 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148988s
	[INFO] 10.244.0.4:37661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206627s
	[INFO] 10.244.0.4:43810 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129193s
	[INFO] 10.244.0.4:47355 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145369s
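
Both CoreDNS logs above show ordinary NOERROR/NXDOMAIN responses to lookups from cluster pods, so in-cluster DNS was healthy at capture time. A hedged sketch of how the same logs are usually retrieved with kubectl; the --context name is an assumption based on the profile name ha-106302 used throughout this report:

    kubectl --context ha-106302 -n kube-system logs coredns-7c65d6cfc9-45m77
    kubectl --context ha-106302 -n kube-system logs coredns-7c65d6cfc9-sjsv2
    # or both replicas at once via the standard kube-dns label
    kubectl --context ha-106302 -n kube-system logs -l k8s-app=kube-dns --prefix=true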
	
	
	==> describe nodes <==
	Name:               ha-106302
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T19_19_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:19:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:20:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    ha-106302
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fbfe8f29ea445c2a705d4735bab42d9
	  System UUID:                9fbfe8f2-9ea4-45c2-a705-d4735bab42d9
	  Boot ID:                    fbdd1078-6187-4d3e-90aa-6ba60d4d7163
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p8z47              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7c65d6cfc9-45m77             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 coredns-7c65d6cfc9-sjsv2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m21s
	  kube-system                 etcd-ha-106302                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m26s
	  kube-system                 kindnet-xr9mh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-106302             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-106302    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-proxy-zw6nj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-106302             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-106302                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m20s  kube-proxy       
	  Normal  Starting                 6m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m26s  kubelet          Node ha-106302 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s  kubelet          Node ha-106302 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s  kubelet          Node ha-106302 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m22s  node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
	  Normal  NodeReady                6m5s   kubelet          Node ha-106302 status is now: NodeReady
	  Normal  RegisteredNode           5m17s  node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
	  Normal  RegisteredNode           4m2s   node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
	
	
	Name:               ha-106302-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_20_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:20:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:23:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-106302-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ca37a23968d4b139155a7b713c26828
	  System UUID:                3ca37a23-968d-4b13-9155-a7b713c26828
	  Boot ID:                    36db6c69-1ef9-45e9-8548-ed0c2d08168d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9kxtc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-106302-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m24s
	  kube-system                 kindnet-thcsp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m26s
	  kube-system                 kube-apiserver-ha-106302-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-106302-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-proxy-n57lf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-ha-106302-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-vip-ha-106302-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m26s)  kubelet          Node ha-106302-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m26s)  kubelet          Node ha-106302-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m26s)  kubelet          Node ha-106302-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  NodeNotReady             97s                    node-controller  Node ha-106302-m02 status is now: NodeNotReady
	
	
	Name:               ha-106302-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_22_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:22:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.151
	  Hostname:    ha-106302-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c79436ccca5a4dcb864b64b8f1638e64
	  System UUID:                c79436cc-ca5a-4dcb-864b-64b8f1638e64
	  Boot ID:                    c0d22d1e-5115-47a7-a1b2-4a76f9bfc0f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tp62                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-106302-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m9s
	  kube-system                 kindnet-wdsv9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m11s
	  kube-system                 kube-apiserver-ha-106302-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-ha-106302-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-pghdx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-ha-106302-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-106302-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m11s)  kubelet          Node ha-106302-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m11s)  kubelet          Node ha-106302-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m11s)  kubelet          Node ha-106302-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	
	
	Name:               ha-106302-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_23_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:23:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-106302-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 230adc0a6a8a4784a2711e0f05c0dc5c
	  System UUID:                230adc0a-6a8a-4784-a271-1e0f05c0dc5c
	  Boot ID:                    c550c7a6-b9cf-4484-890e-5c6b9b697be6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4x5qd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-2dvtn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m3s                 cidrAllocator    Node ha-106302-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m4s)  kubelet          Node ha-106302-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m4s)  kubelet          Node ha-106302-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m4s)  kubelet          Node ha-106302-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-106302-m04 status is now: NodeReady
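
Of the four nodes described above, only ha-106302-m02 carries the node.kubernetes.io/unreachable taints and Unknown conditions ("Kubelet stopped posting node status"), consistent with a secondary control-plane node having been stopped. A minimal sketch for pulling just the Ready condition and taint keys per node; the jsonpath expressions are standard kubectl, and the context name is assumed:

    kubectl --context ha-106302 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\t"}{.spec.taints[*].key}{"\n"}{end}'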
	
	
	==> dmesg <==
	[Dec 5 19:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052678] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040068] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.967635] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.737822] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.642469] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.132933] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.059010] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077817] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.173461] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.135588] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.266467] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.207512] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +3.975007] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.063464] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.124511] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.093371] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.093366] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.201097] kauditd_printk_skb: 34 callbacks suppressed
	[Dec 5 19:20] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a] <==
	{"level":"warn","ts":"2024-12-05T19:26:13.365103Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.379161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.385293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.395906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.408315Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.416283Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.421814Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.426046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.435845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.442795Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.449291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.455807Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.459697Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.464907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.470568Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.477271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.484820Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.485102Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.488955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.489710Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.493601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.499275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.505437Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.512676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:13.565115Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:26:13 up 7 min,  0 users,  load average: 0.25, 0.27, 0.13
	Linux ha-106302 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e] <==
	I1205 19:25:38.032308       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:25:48.037698       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:25:48.037823       1 main.go:301] handling current node
	I1205 19:25:48.037880       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:25:48.037912       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:25:48.038308       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:25:48.038357       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:25:48.038649       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:25:48.038691       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:25:58.032212       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:25:58.032349       1 main.go:301] handling current node
	I1205 19:25:58.032381       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:25:58.032409       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:25:58.032728       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:25:58.032781       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:25:58.032936       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:25:58.032961       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:26:08.033900       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:26:08.033997       1 main.go:301] handling current node
	I1205 19:26:08.034040       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:26:08.034061       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:26:08.034788       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:26:08.034868       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:26:08.035323       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:26:08.036186       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
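
kindnet here is only logging its periodic reconciliation: for each node it records the node IP and the PodCIDR it routes to, and the CIDRs match the PodCIDR fields in the node descriptions above. A small cross-check straight from the API (context name assumed):

    kubectl --context ha-106302 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'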
	
	
	==> kube-apiserver [8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44] <==
	W1205 19:19:46.101456       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.185]
	I1205 19:19:46.102689       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 19:19:46.107444       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 19:19:46.330379       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 19:19:47.696704       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 19:19:47.715088       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 19:19:47.729079       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 19:19:52.034082       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1205 19:19:52.100936       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1205 19:22:38.001032       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32830: use of closed network connection
	E1205 19:22:38.204236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32840: use of closed network connection
	E1205 19:22:38.401399       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32852: use of closed network connection
	E1205 19:22:38.650810       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32868: use of closed network connection
	E1205 19:22:38.848239       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32882: use of closed network connection
	E1205 19:22:39.039033       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32892: use of closed network connection
	E1205 19:22:39.233185       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32904: use of closed network connection
	E1205 19:22:39.423024       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32930: use of closed network connection
	E1205 19:22:39.623335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32946: use of closed network connection
	E1205 19:22:39.929919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32972: use of closed network connection
	E1205 19:22:40.109732       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32994: use of closed network connection
	E1205 19:22:40.313792       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33004: use of closed network connection
	E1205 19:22:40.512273       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33032: use of closed network connection
	E1205 19:22:40.696838       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33064: use of closed network connection
	E1205 19:22:40.891466       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33092: use of closed network connection
	W1205 19:23:56.103047       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.151 192.168.39.185]
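
The apiserver log ends with the kubernetes Service endpoints being reset to [192.168.39.151 192.168.39.185] at 19:23:56, which appears to drop the address of the stopped control-plane member; the earlier "use of closed network connection" lines are clients disconnecting abruptly and are usually harmless. A minimal sketch for inspecting the current control-plane endpoints (context name assumed):

    kubectl --context ha-106302 get endpoints kubernetes -o yaml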
	
	
	==> kube-controller-manager [c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb] <==
	I1205 19:22:37.515258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.952µs"
	I1205 19:22:50.027185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:22:51.994933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302"
	I1205 19:23:03.348987       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m03"
	I1205 19:23:10.074709       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-106302-m04\" does not exist"
	I1205 19:23:10.130455       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-106302-m04" podCIDRs=["10.244.3.0/24"]
	I1205 19:23:10.130559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.130592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.405830       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.799985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:11.200921       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-106302-m04"
	I1205 19:23:11.286372       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:20.510971       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.164993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.165813       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-106302-m04"
	I1205 19:23:31.181172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.224422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:41.047269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:24:36.318018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:36.318367       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-106302-m04"
	I1205 19:24:36.348027       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:36.462551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.68033ms"
	I1205 19:24:36.463140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="102.944µs"
	I1205 19:24:36.509355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:41.525728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
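
The controller-manager log is dominated by node-ipam syncs plus, at 19:24:36, the NodeNotReady handling for ha-106302-m02. In an HA cluster each control-plane node runs a controller-manager, but only the lease holder acts; a hedged way to see which instance is currently active (context name assumed):

    kubectl --context ha-106302 -n kube-system get lease kube-controller-manager -o jsonpath='{.spec.holderIdentity}{"\n"}'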
	
	
	==> kube-proxy [013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 19:19:53.137314       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 19:19:53.171420       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E1205 19:19:53.171824       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 19:19:53.214655       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 19:19:53.214741       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 19:19:53.214788       1 server_linux.go:169] "Using iptables Proxier"
	I1205 19:19:53.217916       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 19:19:53.218705       1 server.go:483] "Version info" version="v1.31.2"
	I1205 19:19:53.218777       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:19:53.220962       1 config.go:199] "Starting service config controller"
	I1205 19:19:53.221650       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 19:19:53.221992       1 config.go:105] "Starting endpoint slice config controller"
	I1205 19:19:53.222064       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 19:19:53.223609       1 config.go:328] "Starting node config controller"
	I1205 19:19:53.226006       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 19:19:53.322722       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 19:19:53.322841       1 shared_informer.go:320] Caches are synced for service config
	I1205 19:19:53.326785       1 shared_informer.go:320] Caches are synced for node config
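
kube-proxy first fails to clean up nftables rules (the guest kernel rejects the nft operations), then settles on the iptables proxier in IPv4 single-stack mode. A hedged sketch for spot-checking the resulting service rules on the node; running a quoted command through minikube ssh with the -p profile flag is an assumption about how this node is reached:

    minikube -p ha-106302 ssh "sudo iptables -t nat -L KUBE-SERVICES | head -n 20"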
	
	
	==> kube-scheduler [dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8] <==
	W1205 19:19:45.698374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:19:45.698482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:19:45.740149       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:19:45.740541       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1205 19:19:48.195246       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1205 19:22:02.375222       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tpm2m\": pod kube-proxy-tpm2m is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tpm2m" node="ha-106302-m03"
	E1205 19:22:02.375416       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1976f453-f240-48ff-bcac-37351800ac58(kube-system/kube-proxy-tpm2m) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tpm2m"
	E1205 19:22:02.375449       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tpm2m\": pod kube-proxy-tpm2m is already assigned to node \"ha-106302-m03\"" pod="kube-system/kube-proxy-tpm2m"
	I1205 19:22:02.375580       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tpm2m" node="ha-106302-m03"
	E1205 19:22:02.382616       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wdsv9\": pod kindnet-wdsv9 is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wdsv9" node="ha-106302-m03"
	E1205 19:22:02.382763       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 83d82f5d-42c3-47be-af20-41b82c16b114(kube-system/kindnet-wdsv9) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-wdsv9"
	E1205 19:22:02.382784       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wdsv9\": pod kindnet-wdsv9 is already assigned to node \"ha-106302-m03\"" pod="kube-system/kindnet-wdsv9"
	I1205 19:22:02.382811       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wdsv9" node="ha-106302-m03"
	E1205 19:22:02.429049       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pghdx\": pod kube-proxy-pghdx is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pghdx" node="ha-106302-m03"
	E1205 19:22:02.429116       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 915060a3-353c-4a2c-a9d6-494206776446(kube-system/kube-proxy-pghdx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-pghdx"
	E1205 19:22:02.429132       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pghdx\": pod kube-proxy-pghdx is already assigned to node \"ha-106302-m03\"" pod="kube-system/kube-proxy-pghdx"
	I1205 19:22:02.429156       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-pghdx" node="ha-106302-m03"
	E1205 19:22:32.450165       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-p8z47\": pod busybox-7dff88458-p8z47 is already assigned to node \"ha-106302\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-p8z47" node="ha-106302"
	E1205 19:22:32.450464       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 16e14c1a-196d-42a8-b245-1a488cb9667f(default/busybox-7dff88458-p8z47) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-p8z47"
	E1205 19:22:32.450610       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-p8z47\": pod busybox-7dff88458-p8z47 is already assigned to node \"ha-106302\"" pod="default/busybox-7dff88458-p8z47"
	I1205 19:22:32.450729       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-p8z47" node="ha-106302"
	E1205 19:22:32.450776       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9tp62\": pod busybox-7dff88458-9tp62 is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9tp62" node="ha-106302-m03"
	E1205 19:22:32.459571       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod afb0c778-acb1-4db0-b0b6-f054049d0a9d(default/busybox-7dff88458-9tp62) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-9tp62"
	E1205 19:22:32.460188       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9tp62\": pod busybox-7dff88458-9tp62 is already assigned to node \"ha-106302-m03\"" pod="default/busybox-7dff88458-9tp62"
	I1205 19:22:32.460282       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-9tp62" node="ha-106302-m03"
	
	
	==> kubelet <==
	Dec 05 19:24:47 ha-106302 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 19:24:47 ha-106302 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 19:24:47 ha-106302 kubelet[1308]: E1205 19:24:47.778614    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426687778175124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:47 ha-106302 kubelet[1308]: E1205 19:24:47.778767    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426687778175124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:57 ha-106302 kubelet[1308]: E1205 19:24:57.781563    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426697781244346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:57 ha-106302 kubelet[1308]: E1205 19:24:57.781621    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426697781244346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:07 ha-106302 kubelet[1308]: E1205 19:25:07.783663    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426707783267296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:07 ha-106302 kubelet[1308]: E1205 19:25:07.783686    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426707783267296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:17 ha-106302 kubelet[1308]: E1205 19:25:17.787301    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426717786088822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:17 ha-106302 kubelet[1308]: E1205 19:25:17.788092    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426717786088822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:27 ha-106302 kubelet[1308]: E1205 19:25:27.791254    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426727789306197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:27 ha-106302 kubelet[1308]: E1205 19:25:27.792185    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426727789306197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:37 ha-106302 kubelet[1308]: E1205 19:25:37.793643    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426737793262536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:37 ha-106302 kubelet[1308]: E1205 19:25:37.793688    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426737793262536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.685793    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 19:25:47 ha-106302 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.795235    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426747794906816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.795258    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426747794906816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:57 ha-106302 kubelet[1308]: E1205 19:25:57.797302    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426757796435936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:57 ha-106302 kubelet[1308]: E1205 19:25:57.798201    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426757796435936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:07 ha-106302 kubelet[1308]: E1205 19:26:07.800104    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426767799828720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:07 ha-106302 kubelet[1308]: E1205 19:26:07.800714    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426767799828720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-106302 -n ha-106302
helpers_test.go:261: (dbg) Run:  kubectl --context ha-106302 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.83s)
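The repeated "Plugin Failed ... Operation cannot be fulfilled on pods/binding ... is already assigned to node" entries in the kube-scheduler log above are 409 Conflict responses: with three control-plane schedulers running in this HA test, a pod can be bound by one scheduler instance while another still has it in flight, and the loser's DefaultBinder call is rejected before the scheduler drops the pod ("Pod has been assigned to node. Abort adding it back to queue."). Below is a minimal client-go sketch of that bind-then-detect-conflict path; it is an illustration under the assumption of a kubeconfig on disk, not minikube's or the scheduler's actual code, and the pod/node names are copied from the log purely as examples.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// bindPod issues a pods/binding subresource request like the scheduler's
// DefaultBinder plugin does. If another scheduler instance has already bound
// the pod, the API server answers 409 Conflict ("Operation cannot be fulfilled
// on pods/binding ..."), which is treated here as "already scheduled".
func bindPod(ctx context.Context, cs kubernetes.Interface, namespace, pod, node string) error {
	binding := &corev1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: namespace},
		Target:     corev1.ObjectReference{Kind: "Node", Name: node},
	}
	err := cs.CoreV1().Pods(namespace).Bind(ctx, binding, metav1.CreateOptions{})
	if apierrors.IsConflict(err) {
		fmt.Printf("pod %s/%s already assigned, nothing to do\n", namespace, pod)
		return nil
	}
	return err
}

func main() {
	// Hypothetical kubeconfig path; the test harness uses its own context instead.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := bindPod(context.Background(), cs, "kube-system", "kube-proxy-tpm2m", "ha-106302-m03"); err != nil {
		panic(err)
	}
}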

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.425888711s)
ha_test.go:415: expected profile "ha-106302" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-106302\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-106302\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-106302\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.185\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.22\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.151\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.7\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt
\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",
\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-106302 -n ha-106302
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-106302 logs -n 25: (1.468520951s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302:/home/docker/cp-test_ha-106302-m03_ha-106302.txt                     |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302 sudo cat                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302.txt                               |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m04 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp testdata/cp-test.txt                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302:/home/docker/cp-test_ha-106302-m04_ha-106302.txt                     |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302 sudo cat                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302.txt                               |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03:/home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m03 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-106302 node stop m02 -v=7                                                   | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:19:05
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:19:05.666020  549077 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:19:05.666172  549077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:19:05.666182  549077 out.go:358] Setting ErrFile to fd 2...
	I1205 19:19:05.666187  549077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:19:05.666372  549077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:19:05.666982  549077 out.go:352] Setting JSON to false
	I1205 19:19:05.667993  549077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7292,"bootTime":1733419054,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:19:05.668118  549077 start.go:139] virtualization: kvm guest
	I1205 19:19:05.670258  549077 out.go:177] * [ha-106302] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:19:05.672244  549077 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:19:05.672310  549077 notify.go:220] Checking for updates...
	I1205 19:19:05.674836  549077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:19:05.676311  549077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:05.677586  549077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:05.678906  549077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:19:05.680179  549077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:19:05.681501  549077 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:19:05.716520  549077 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 19:19:05.718361  549077 start.go:297] selected driver: kvm2
	I1205 19:19:05.718375  549077 start.go:901] validating driver "kvm2" against <nil>
	I1205 19:19:05.718387  549077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:19:05.719138  549077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:19:05.719217  549077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:19:05.734721  549077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:19:05.734777  549077 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:19:05.735145  549077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:19:05.735198  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:05.735258  549077 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 19:19:05.735271  549077 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:19:05.735352  549077 start.go:340] cluster config:
	{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1205 19:19:05.735498  549077 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:19:05.737389  549077 out.go:177] * Starting "ha-106302" primary control-plane node in "ha-106302" cluster
	I1205 19:19:05.738520  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:05.738565  549077 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:19:05.738579  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:19:05.738663  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:19:05.738678  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:19:05.739034  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:05.739058  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json: {Name:mk36f887968924e3b867abb3b152df7882583b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:05.739210  549077 start.go:360] acquireMachinesLock for ha-106302: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:19:05.739241  549077 start.go:364] duration metric: took 16.973µs to acquireMachinesLock for "ha-106302"
	I1205 19:19:05.739258  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:05.739311  549077 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 19:19:05.740876  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:19:05.741018  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:05.741056  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:05.755320  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I1205 19:19:05.755768  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:05.756364  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:05.756386  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:05.756720  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:05.756918  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:05.757058  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:05.757247  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:19:05.757287  549077 client.go:168] LocalClient.Create starting
	I1205 19:19:05.757338  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:19:05.757377  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:05.757396  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:05.757476  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:19:05.757503  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:05.757522  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:05.757549  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:19:05.757567  549077 main.go:141] libmachine: (ha-106302) Calling .PreCreateCheck
	I1205 19:19:05.757886  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:05.758310  549077 main.go:141] libmachine: Creating machine...
	I1205 19:19:05.758325  549077 main.go:141] libmachine: (ha-106302) Calling .Create
	I1205 19:19:05.758443  549077 main.go:141] libmachine: (ha-106302) Creating KVM machine...
	I1205 19:19:05.759563  549077 main.go:141] libmachine: (ha-106302) DBG | found existing default KVM network
	I1205 19:19:05.760292  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:05.760130  549100 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1205 19:19:05.760373  549077 main.go:141] libmachine: (ha-106302) DBG | created network xml: 
	I1205 19:19:05.760394  549077 main.go:141] libmachine: (ha-106302) DBG | <network>
	I1205 19:19:05.760405  549077 main.go:141] libmachine: (ha-106302) DBG |   <name>mk-ha-106302</name>
	I1205 19:19:05.760417  549077 main.go:141] libmachine: (ha-106302) DBG |   <dns enable='no'/>
	I1205 19:19:05.760428  549077 main.go:141] libmachine: (ha-106302) DBG |   
	I1205 19:19:05.760437  549077 main.go:141] libmachine: (ha-106302) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 19:19:05.760450  549077 main.go:141] libmachine: (ha-106302) DBG |     <dhcp>
	I1205 19:19:05.760460  549077 main.go:141] libmachine: (ha-106302) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 19:19:05.760472  549077 main.go:141] libmachine: (ha-106302) DBG |     </dhcp>
	I1205 19:19:05.760488  549077 main.go:141] libmachine: (ha-106302) DBG |   </ip>
	I1205 19:19:05.760499  549077 main.go:141] libmachine: (ha-106302) DBG |   
	I1205 19:19:05.760507  549077 main.go:141] libmachine: (ha-106302) DBG | </network>
	I1205 19:19:05.760517  549077 main.go:141] libmachine: (ha-106302) DBG | 
	I1205 19:19:05.765547  549077 main.go:141] libmachine: (ha-106302) DBG | trying to create private KVM network mk-ha-106302 192.168.39.0/24...
	I1205 19:19:05.832912  549077 main.go:141] libmachine: (ha-106302) DBG | private KVM network mk-ha-106302 192.168.39.0/24 created
	I1205 19:19:05.832950  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:05.832854  549100 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:05.832976  549077 main.go:141] libmachine: (ha-106302) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 ...
	I1205 19:19:05.832995  549077 main.go:141] libmachine: (ha-106302) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:19:05.833015  549077 main.go:141] libmachine: (ha-106302) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:19:06.116114  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.115928  549100 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa...
	I1205 19:19:06.195132  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.194945  549100 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/ha-106302.rawdisk...
	I1205 19:19:06.195166  549077 main.go:141] libmachine: (ha-106302) DBG | Writing magic tar header
	I1205 19:19:06.195176  549077 main.go:141] libmachine: (ha-106302) DBG | Writing SSH key tar header
	I1205 19:19:06.195183  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.195098  549100 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 ...
	I1205 19:19:06.195194  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302
	I1205 19:19:06.195272  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 (perms=drwx------)
	I1205 19:19:06.195294  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:19:06.195305  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:19:06.195321  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:06.195332  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:19:06.195340  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:19:06.195349  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:19:06.195354  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:19:06.195360  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home
	I1205 19:19:06.195379  549077 main.go:141] libmachine: (ha-106302) DBG | Skipping /home - not owner
	I1205 19:19:06.195390  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:19:06.195397  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:19:06.195403  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:19:06.195409  549077 main.go:141] libmachine: (ha-106302) Creating domain...
	I1205 19:19:06.196529  549077 main.go:141] libmachine: (ha-106302) define libvirt domain using xml: 
	I1205 19:19:06.196544  549077 main.go:141] libmachine: (ha-106302) <domain type='kvm'>
	I1205 19:19:06.196550  549077 main.go:141] libmachine: (ha-106302)   <name>ha-106302</name>
	I1205 19:19:06.196561  549077 main.go:141] libmachine: (ha-106302)   <memory unit='MiB'>2200</memory>
	I1205 19:19:06.196569  549077 main.go:141] libmachine: (ha-106302)   <vcpu>2</vcpu>
	I1205 19:19:06.196578  549077 main.go:141] libmachine: (ha-106302)   <features>
	I1205 19:19:06.196586  549077 main.go:141] libmachine: (ha-106302)     <acpi/>
	I1205 19:19:06.196595  549077 main.go:141] libmachine: (ha-106302)     <apic/>
	I1205 19:19:06.196603  549077 main.go:141] libmachine: (ha-106302)     <pae/>
	I1205 19:19:06.196621  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.196632  549077 main.go:141] libmachine: (ha-106302)   </features>
	I1205 19:19:06.196643  549077 main.go:141] libmachine: (ha-106302)   <cpu mode='host-passthrough'>
	I1205 19:19:06.196652  549077 main.go:141] libmachine: (ha-106302)   
	I1205 19:19:06.196658  549077 main.go:141] libmachine: (ha-106302)   </cpu>
	I1205 19:19:06.196670  549077 main.go:141] libmachine: (ha-106302)   <os>
	I1205 19:19:06.196677  549077 main.go:141] libmachine: (ha-106302)     <type>hvm</type>
	I1205 19:19:06.196689  549077 main.go:141] libmachine: (ha-106302)     <boot dev='cdrom'/>
	I1205 19:19:06.196704  549077 main.go:141] libmachine: (ha-106302)     <boot dev='hd'/>
	I1205 19:19:06.196715  549077 main.go:141] libmachine: (ha-106302)     <bootmenu enable='no'/>
	I1205 19:19:06.196724  549077 main.go:141] libmachine: (ha-106302)   </os>
	I1205 19:19:06.196732  549077 main.go:141] libmachine: (ha-106302)   <devices>
	I1205 19:19:06.196743  549077 main.go:141] libmachine: (ha-106302)     <disk type='file' device='cdrom'>
	I1205 19:19:06.196758  549077 main.go:141] libmachine: (ha-106302)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/boot2docker.iso'/>
	I1205 19:19:06.196769  549077 main.go:141] libmachine: (ha-106302)       <target dev='hdc' bus='scsi'/>
	I1205 19:19:06.196777  549077 main.go:141] libmachine: (ha-106302)       <readonly/>
	I1205 19:19:06.196783  549077 main.go:141] libmachine: (ha-106302)     </disk>
	I1205 19:19:06.196795  549077 main.go:141] libmachine: (ha-106302)     <disk type='file' device='disk'>
	I1205 19:19:06.196806  549077 main.go:141] libmachine: (ha-106302)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:19:06.196821  549077 main.go:141] libmachine: (ha-106302)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/ha-106302.rawdisk'/>
	I1205 19:19:06.196833  549077 main.go:141] libmachine: (ha-106302)       <target dev='hda' bus='virtio'/>
	I1205 19:19:06.196842  549077 main.go:141] libmachine: (ha-106302)     </disk>
	I1205 19:19:06.196851  549077 main.go:141] libmachine: (ha-106302)     <interface type='network'>
	I1205 19:19:06.196861  549077 main.go:141] libmachine: (ha-106302)       <source network='mk-ha-106302'/>
	I1205 19:19:06.196873  549077 main.go:141] libmachine: (ha-106302)       <model type='virtio'/>
	I1205 19:19:06.196896  549077 main.go:141] libmachine: (ha-106302)     </interface>
	I1205 19:19:06.196909  549077 main.go:141] libmachine: (ha-106302)     <interface type='network'>
	I1205 19:19:06.196919  549077 main.go:141] libmachine: (ha-106302)       <source network='default'/>
	I1205 19:19:06.196927  549077 main.go:141] libmachine: (ha-106302)       <model type='virtio'/>
	I1205 19:19:06.196936  549077 main.go:141] libmachine: (ha-106302)     </interface>
	I1205 19:19:06.196944  549077 main.go:141] libmachine: (ha-106302)     <serial type='pty'>
	I1205 19:19:06.196953  549077 main.go:141] libmachine: (ha-106302)       <target port='0'/>
	I1205 19:19:06.196962  549077 main.go:141] libmachine: (ha-106302)     </serial>
	I1205 19:19:06.196975  549077 main.go:141] libmachine: (ha-106302)     <console type='pty'>
	I1205 19:19:06.196984  549077 main.go:141] libmachine: (ha-106302)       <target type='serial' port='0'/>
	I1205 19:19:06.196996  549077 main.go:141] libmachine: (ha-106302)     </console>
	I1205 19:19:06.197007  549077 main.go:141] libmachine: (ha-106302)     <rng model='virtio'>
	I1205 19:19:06.197017  549077 main.go:141] libmachine: (ha-106302)       <backend model='random'>/dev/random</backend>
	I1205 19:19:06.197028  549077 main.go:141] libmachine: (ha-106302)     </rng>
	I1205 19:19:06.197036  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.197055  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.197068  549077 main.go:141] libmachine: (ha-106302)   </devices>
	I1205 19:19:06.197073  549077 main.go:141] libmachine: (ha-106302) </domain>
	I1205 19:19:06.197078  549077 main.go:141] libmachine: (ha-106302) 
	I1205 19:19:06.202279  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:71:9c:4d in network default
	I1205 19:19:06.203034  549077 main.go:141] libmachine: (ha-106302) Ensuring networks are active...
	I1205 19:19:06.203055  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:06.203739  549077 main.go:141] libmachine: (ha-106302) Ensuring network default is active
	I1205 19:19:06.204123  549077 main.go:141] libmachine: (ha-106302) Ensuring network mk-ha-106302 is active
	I1205 19:19:06.204705  549077 main.go:141] libmachine: (ha-106302) Getting domain xml...
	I1205 19:19:06.205494  549077 main.go:141] libmachine: (ha-106302) Creating domain...
	I1205 19:19:07.414905  549077 main.go:141] libmachine: (ha-106302) Waiting to get IP...
	I1205 19:19:07.415701  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:07.416131  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:07.416172  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:07.416110  549100 retry.go:31] will retry after 254.984492ms: waiting for machine to come up
	I1205 19:19:07.672644  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:07.673096  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:07.673126  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:07.673025  549100 retry.go:31] will retry after 337.308268ms: waiting for machine to come up
	I1205 19:19:08.011677  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.012131  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.012153  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.012097  549100 retry.go:31] will retry after 331.381496ms: waiting for machine to come up
	I1205 19:19:08.344830  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.345286  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.345315  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.345230  549100 retry.go:31] will retry after 526.921251ms: waiting for machine to come up
	I1205 19:19:08.874020  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.874426  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.874457  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.874366  549100 retry.go:31] will retry after 677.76743ms: waiting for machine to come up
	I1205 19:19:09.554490  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:09.555045  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:09.555078  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:09.554953  549100 retry.go:31] will retry after 810.208397ms: waiting for machine to come up
	I1205 19:19:10.367000  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:10.367429  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:10.367463  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:10.367397  549100 retry.go:31] will retry after 1.115748222s: waiting for machine to come up
	I1205 19:19:11.484531  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:11.485067  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:11.485098  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:11.485008  549100 retry.go:31] will retry after 1.3235703s: waiting for machine to come up
	I1205 19:19:12.810602  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:12.810991  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:12.811014  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:12.810945  549100 retry.go:31] will retry after 1.831554324s: waiting for machine to come up
	I1205 19:19:14.645035  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:14.645488  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:14.645513  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:14.645439  549100 retry.go:31] will retry after 1.712987373s: waiting for machine to come up
	I1205 19:19:16.360441  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:16.361053  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:16.361095  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:16.360964  549100 retry.go:31] will retry after 1.757836043s: waiting for machine to come up
	I1205 19:19:18.120905  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:18.121462  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:18.121490  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:18.121398  549100 retry.go:31] will retry after 2.555295546s: waiting for machine to come up
	I1205 19:19:20.680255  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:20.680831  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:20.680857  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:20.680783  549100 retry.go:31] will retry after 3.433196303s: waiting for machine to come up
	I1205 19:19:24.117782  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:24.118200  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:24.118225  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:24.118165  549100 retry.go:31] will retry after 5.333530854s: waiting for machine to come up
	I1205 19:19:29.456371  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.456820  549077 main.go:141] libmachine: (ha-106302) Found IP for machine: 192.168.39.185
	I1205 19:19:29.456837  549077 main.go:141] libmachine: (ha-106302) Reserving static IP address...
	I1205 19:19:29.456845  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has current primary IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.457259  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find host DHCP lease matching {name: "ha-106302", mac: "52:54:00:3b:e4:76", ip: "192.168.39.185"} in network mk-ha-106302
	I1205 19:19:29.532847  549077 main.go:141] libmachine: (ha-106302) DBG | Getting to WaitForSSH function...
	I1205 19:19:29.532882  549077 main.go:141] libmachine: (ha-106302) Reserved static IP address: 192.168.39.185
	I1205 19:19:29.532895  549077 main.go:141] libmachine: (ha-106302) Waiting for SSH to be available...
	I1205 19:19:29.535405  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.536081  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.536388  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.536771  549077 main.go:141] libmachine: (ha-106302) DBG | Using SSH client type: external
	I1205 19:19:29.536915  549077 main.go:141] libmachine: (ha-106302) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa (-rw-------)
	I1205 19:19:29.536944  549077 main.go:141] libmachine: (ha-106302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:19:29.536962  549077 main.go:141] libmachine: (ha-106302) DBG | About to run SSH command:
	I1205 19:19:29.536972  549077 main.go:141] libmachine: (ha-106302) DBG | exit 0
	I1205 19:19:29.664869  549077 main.go:141] libmachine: (ha-106302) DBG | SSH cmd err, output: <nil>: 
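[Editor's note] Here the driver confirms the guest is reachable by running a trivial `exit 0` through an external ssh client with host-key checking disabled and the generated private key. A minimal sketch of that probe using os/exec is shown below; the flags mirror the ones in the log, and the IP and key path are placeholders taken from this run.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady runs `exit 0` on the guest with options similar to the ones the
    // driver logs, returning nil once the SSH daemon accepts the connection.
    func sshReady(ip, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + ip,
    		"exit 0",
    	}
    	return exec.Command("/usr/bin/ssh", args...).Run()
    }

    func main() {
    	for attempt := 0; attempt < 30; attempt++ {
    		if err := sshReady("192.168.39.185", "/path/to/id_rsa"); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }
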
	I1205 19:19:29.665141  549077 main.go:141] libmachine: (ha-106302) KVM machine creation complete!
	I1205 19:19:29.665477  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:29.666068  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:29.666255  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:29.666420  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:19:29.666438  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:29.667703  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:19:29.667716  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:19:29.667721  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:19:29.667726  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.669895  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.670221  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.670248  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.670353  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.670530  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.670706  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.670840  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.671003  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.671220  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.671232  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:19:29.779777  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:19:29.779805  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:19:29.779833  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.782799  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.783132  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.783166  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.783331  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.783547  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.783683  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.783825  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.783999  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.784181  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.784191  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:19:29.893268  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:19:29.893371  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:19:29.893381  549077 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:19:29.893390  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:29.893630  549077 buildroot.go:166] provisioning hostname "ha-106302"
	I1205 19:19:29.893659  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:29.893862  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.896175  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.896531  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.896559  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.896683  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.896874  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.897035  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.897188  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.897357  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.897522  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.897537  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302 && echo "ha-106302" | sudo tee /etc/hostname
	I1205 19:19:30.019869  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:19:30.019903  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.022773  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.023137  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.023166  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.023330  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.023501  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.023684  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.023794  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.023973  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.024192  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.024213  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:19:30.142377  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:19:30.142414  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:19:30.142464  549077 buildroot.go:174] setting up certificates
	I1205 19:19:30.142480  549077 provision.go:84] configureAuth start
	I1205 19:19:30.142498  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:30.142814  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.145608  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.145944  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.145976  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.146132  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.148289  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.148544  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.148570  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.148679  549077 provision.go:143] copyHostCerts
	I1205 19:19:30.148727  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:19:30.148761  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:19:30.148778  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:19:30.148862  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:19:30.148936  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:19:30.148954  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:19:30.148960  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:19:30.148984  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:19:30.149037  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:19:30.149054  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:19:30.149058  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:19:30.149079  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:19:30.149123  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302 san=[127.0.0.1 192.168.39.185 ha-106302 localhost minikube]
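[Editor's note] The provisioner then issues a host-specific server certificate signed by the local CA, with the listed IPs and hostnames as subject alternative names. Below is a condensed, illustrative sketch of that kind of step with crypto/x509; it stands in for (and is not) the real provision code, and uses a throwaway self-signed CA instead of the minikube CA files.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // signServerCert creates a server certificate for the given SANs, signed by
    // the supplied CA certificate and key.
    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, names []string) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-106302"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,  // IP SANs
    		DNSNames:     names, // DNS SANs
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
    	// Throwaway self-signed CA standing in for the minikube CA.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	pemCert, err := signServerCert(caCert, caKey,
    		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.185")},
    		[]string{"ha-106302", "localhost", "minikube"})
    	if err != nil {
    		panic(err)
    	}
    	os.Stdout.Write(pemCert)
    }
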
	I1205 19:19:30.203242  549077 provision.go:177] copyRemoteCerts
	I1205 19:19:30.203307  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:19:30.203333  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.206290  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.206588  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.206621  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.206770  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.206956  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.207107  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.207262  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.291637  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:19:30.291726  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:19:30.316534  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:19:30.316648  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 19:19:30.340941  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:19:30.341027  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:19:30.365151  549077 provision.go:87] duration metric: took 222.64958ms to configureAuth
	I1205 19:19:30.365205  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:19:30.365380  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:30.365454  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.367820  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.368297  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.368331  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.368517  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.368750  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.368925  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.369063  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.369263  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.369448  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.369470  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:19:30.602742  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:19:30.602781  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:19:30.602812  549077 main.go:141] libmachine: (ha-106302) Calling .GetURL
	I1205 19:19:30.604203  549077 main.go:141] libmachine: (ha-106302) DBG | Using libvirt version 6000000
	I1205 19:19:30.606408  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.606761  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.606783  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.606936  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:19:30.606953  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:19:30.606980  549077 client.go:171] duration metric: took 24.849681626s to LocalClient.Create
	I1205 19:19:30.607004  549077 start.go:167] duration metric: took 24.849757772s to libmachine.API.Create "ha-106302"
	I1205 19:19:30.607018  549077 start.go:293] postStartSetup for "ha-106302" (driver="kvm2")
	I1205 19:19:30.607027  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:19:30.607063  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.607325  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:19:30.607353  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.609392  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.609687  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.609717  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.609857  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.610024  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.610186  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.610314  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.696960  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:19:30.708057  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:19:30.708089  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:19:30.708159  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:19:30.708255  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:19:30.708293  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:19:30.708421  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:19:30.723671  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:19:30.750926  549077 start.go:296] duration metric: took 143.887881ms for postStartSetup
	I1205 19:19:30.750995  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:30.751793  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.754292  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.754719  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.754767  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.755073  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:30.755274  549077 start.go:128] duration metric: took 25.015949989s to createHost
	I1205 19:19:30.755307  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.757830  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.758211  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.758247  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.758373  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.758576  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.758728  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.758849  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.759003  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.759199  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.759225  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:19:30.869236  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426370.835143064
	
	I1205 19:19:30.869266  549077 fix.go:216] guest clock: 1733426370.835143064
	I1205 19:19:30.869276  549077 fix.go:229] Guest: 2024-12-05 19:19:30.835143064 +0000 UTC Remote: 2024-12-05 19:19:30.755292155 +0000 UTC m=+25.129028552 (delta=79.850909ms)
	I1205 19:19:30.869342  549077 fix.go:200] guest clock delta is within tolerance: 79.850909ms
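[Editor's note] The fix step reads the guest clock over SSH (date +%s.%N), compares it with the host clock, and only resyncs when the delta exceeds a tolerance. A small sketch of that comparison follows, using the delta from this run; the 2s tolerance value is an assumption for illustration.

    package main

    import (
    	"fmt"
    	"math"
    	"time"
    )

    // clockDeltaWithinTolerance reports whether the guest clock is close enough
    // to the host clock that no adjustment is needed.
    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
    	// Values taken from the log above: guest 1733426370.835143064, host ~79.85ms behind.
    	guest := time.Unix(1733426370, 835143064)
    	host := guest.Add(-79850909 * time.Nanosecond)
    	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second) // tolerance is assumed
    	fmt.Printf("delta=%s within tolerance: %v\n", delta, ok)
    }
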
	I1205 19:19:30.869354  549077 start.go:83] releasing machines lock for "ha-106302", held for 25.130102669s
	I1205 19:19:30.869396  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.869701  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.872169  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.872505  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.872550  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.872651  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873195  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873371  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873461  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:19:30.873500  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.873622  549077 ssh_runner.go:195] Run: cat /version.json
	I1205 19:19:30.873648  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.876112  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876348  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876515  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.876544  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876694  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.876787  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.876829  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876854  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.876974  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.877063  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.877155  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.877225  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.877286  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.877416  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.978260  549077 ssh_runner.go:195] Run: systemctl --version
	I1205 19:19:30.984523  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:19:31.144577  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:19:31.150862  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:19:31.150921  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:19:31.168518  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:19:31.168546  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:19:31.168607  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:19:31.184398  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:19:31.198391  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:19:31.198459  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:19:31.212374  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:19:31.227092  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:19:31.345190  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:19:31.498651  549077 docker.go:233] disabling docker service ...
	I1205 19:19:31.498756  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:19:31.514013  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:19:31.527698  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:19:31.668291  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:19:31.787293  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:19:31.802121  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:19:31.821416  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:19:31.821488  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.831922  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:19:31.832002  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.842263  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.852580  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.863167  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:19:31.873525  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.883966  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.901444  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.913185  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:19:31.922739  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:19:31.922847  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:19:31.935394  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
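[Editor's note] When the net.bridge sysctl is missing (status 255 above), the setup falls back to loading the br_netfilter module and then enables IPv4 forwarding. The same sequence can be expressed as a short sketch over os/exec; it is illustrative only and assumes passwordless sudo on the target host.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and wraps any failure with its combined output.
    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s %v: %w (%s)", name, args, err, out)
    	}
    	return nil
    }

    func main() {
    	// Probe the bridge netfilter sysctl; if it is absent, load the module.
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		fmt.Println("sysctl not available, loading br_netfilter:", err)
    		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			panic(err)
    		}
    	}
    	// Enable IPv4 forwarding for pod networking.
    	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
    		panic(err)
    	}
    }
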
	I1205 19:19:31.944801  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:19:32.062619  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:19:32.155496  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:19:32.155575  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:19:32.161325  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:19:32.161401  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:19:32.165363  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:19:32.206408  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:19:32.206526  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:19:32.236278  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:19:32.267603  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:19:32.269318  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:32.272307  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:32.272654  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:32.272680  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:32.272875  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:19:32.277254  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:19:32.290866  549077 kubeadm.go:883] updating cluster {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:19:32.290982  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:32.291025  549077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:19:32.327363  549077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 19:19:32.327433  549077 ssh_runner.go:195] Run: which lz4
	I1205 19:19:32.331533  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 19:19:32.331639  549077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 19:19:32.335872  549077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 19:19:32.335904  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 19:19:33.796243  549077 crio.go:462] duration metric: took 1.464622041s to copy over tarball
	I1205 19:19:33.796360  549077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 19:19:35.904137  549077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.107740538s)
	I1205 19:19:35.904177  549077 crio.go:469] duration metric: took 2.107873128s to extract the tarball
	I1205 19:19:35.904188  549077 ssh_runner.go:146] rm: /preloaded.tar.lz4
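[Editor's note] Because no preloaded images were found in the CRI-O store, the ~392 MB preload tarball is copied to the guest and unpacked into /var with lz4 decompression, preserving security xattrs (capabilities). A rough local sketch of the extraction and its timing, assuming tar and lz4 are installed:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	// Mirrors the command from the log: extract the lz4-compressed preload
    	// tarball into /var while keeping security xattrs.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extraction failed: %v (%s)\n", err, out)
    		return
    	}
    	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
    }
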
	I1205 19:19:35.941468  549077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:19:35.985079  549077 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:19:35.985107  549077 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:19:35.985116  549077 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.2 crio true true} ...
	I1205 19:19:35.985222  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:19:35.985289  549077 ssh_runner.go:195] Run: crio config
	I1205 19:19:36.034780  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:36.034806  549077 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 19:19:36.034818  549077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:19:36.034841  549077 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-106302 NodeName:ha-106302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:19:36.035004  549077 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-106302"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 19:19:36.035032  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:19:36.035097  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:19:36.051693  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:19:36.051834  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1205 19:19:36.051903  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:19:36.062174  549077 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:19:36.062270  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 19:19:36.072102  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 19:19:36.089037  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:19:36.105710  549077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 19:19:36.122352  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1205 19:19:36.139382  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:19:36.143400  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:19:36.156091  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:19:36.264660  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:19:36.281414  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.185
	I1205 19:19:36.281442  549077 certs.go:194] generating shared ca certs ...
	I1205 19:19:36.281458  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.281638  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:19:36.281689  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:19:36.281704  549077 certs.go:256] generating profile certs ...
	I1205 19:19:36.281767  549077 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:19:36.281786  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt with IP's: []
	I1205 19:19:36.500418  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt ...
	I1205 19:19:36.500457  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt: {Name:mkb14e7bfcf7e74b43ed78fd0539344fe783f416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.500681  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key ...
	I1205 19:19:36.500700  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key: {Name:mk7e0330a0f2228d88e0f9d58264fe1f08349563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.500831  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da
	I1205 19:19:36.500858  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.254]
	I1205 19:19:36.595145  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da ...
	I1205 19:19:36.595178  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da: {Name:mk6fe31beb668f4be09d7ef716f12b627681f889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.595356  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da ...
	I1205 19:19:36.595368  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da: {Name:mkb2102bd03507fee93efd6f4ad4d01650f6960d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.595451  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:19:36.595530  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:19:36.595588  549077 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:19:36.595600  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt with IP's: []
	I1205 19:19:36.750498  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt ...
	I1205 19:19:36.750528  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt: {Name:mk310719ddd3b7c13526e0d5963ab5146ba62c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.750689  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key ...
	I1205 19:19:36.750700  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key: {Name:mka21d6cd95f23029a85e314b05925420c5b8d35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.750768  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:19:36.750785  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:19:36.750796  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:19:36.750809  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:19:36.750819  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:19:36.750831  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:19:36.750841  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:19:36.750856  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:19:36.750907  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:19:36.750946  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:19:36.750968  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:19:36.750995  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:19:36.751018  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:19:36.751046  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:19:36.751085  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:19:36.751157  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:19:36.751182  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:19:36.751197  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:36.751757  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:19:36.777283  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:19:36.800796  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:19:36.824188  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:19:36.847922  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 19:19:36.871853  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:19:36.897433  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:19:36.923449  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:19:36.949838  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:19:36.975187  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:19:36.999764  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:19:37.024507  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
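Note: the certificate sync in the scp lines above can be spot-checked on the node. A minimal sketch, using the paths from this run (the expected SANs, including the 192.168.39.254 VIP, come from the generation log at 19:19:36.500858):

    # verify the profile's apiserver cert chains to the shared minikube CA
    openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt

    # confirm the cert carries the HA virtual IP and the node IP in its SANs
    openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'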
	I1205 19:19:37.044052  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:19:37.052297  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:19:37.068345  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.073536  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.073603  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.080035  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:19:37.091136  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:19:37.115623  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.120621  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.120687  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.126618  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:19:37.138669  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:19:37.150853  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.155803  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.155881  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.162049  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
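Note: the three blocks above repeat the same trust-store install pattern: copy the PEM into /usr/share/ca-certificates, then symlink it into /etc/ssl/certs under its OpenSSL subject hash. A minimal sketch of that pattern (file name illustrative; the hash for minikubeCA.pem resolves to b5213941 in this run):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    # the subject hash is what OpenSSL uses to find the CA at verification time
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"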
	I1205 19:19:37.174819  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:19:37.179494  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:19:37.179570  549077 kubeadm.go:392] StartCluster: {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:19:37.179688  549077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:19:37.179745  549077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:19:37.223116  549077 cri.go:89] found id: ""
	I1205 19:19:37.223191  549077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:19:37.234706  549077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:19:37.247347  549077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:19:37.259258  549077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:19:37.259287  549077 kubeadm.go:157] found existing configuration files:
	
	I1205 19:19:37.259336  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 19:19:37.269699  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 19:19:37.269766  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 19:19:37.280566  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 19:19:37.290999  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 19:19:37.291070  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 19:19:37.302967  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 19:19:37.313065  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 19:19:37.313160  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 19:19:37.323523  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 19:19:37.333224  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 19:19:37.333286  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
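Note: the four grep/rm pairs above apply one test per kubeconfig: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so kubeadm can regenerate it. The same check, written as a loop (a sketch, not the tool's actual code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero if the endpoint is absent or the file does not exist
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done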
	I1205 19:19:37.343725  549077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 19:19:37.465425  549077 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 19:19:37.465503  549077 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 19:19:37.563680  549077 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:19:37.563837  549077 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:19:37.563944  549077 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 19:19:37.577125  549077 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:19:37.767794  549077 out.go:235]   - Generating certificates and keys ...
	I1205 19:19:37.767998  549077 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 19:19:37.768133  549077 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 19:19:37.768233  549077 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:19:37.823275  549077 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:19:38.256538  549077 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:19:38.418481  549077 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 19:19:38.506453  549077 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 19:19:38.506612  549077 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-106302 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I1205 19:19:38.599268  549077 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 19:19:38.599504  549077 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-106302 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I1205 19:19:38.721006  549077 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:19:38.801347  549077 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:19:39.020781  549077 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 19:19:39.020849  549077 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:19:39.351214  549077 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:19:39.652426  549077 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 19:19:39.852747  549077 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:19:39.949305  549077 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:19:40.093193  549077 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:19:40.093754  549077 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:19:40.099424  549077 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:19:40.101578  549077 out.go:235]   - Booting up control plane ...
	I1205 19:19:40.101681  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:19:40.101747  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:19:40.101808  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:19:40.118245  549077 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:19:40.124419  549077 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:19:40.124472  549077 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 19:19:40.264350  549077 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 19:19:40.264527  549077 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 19:19:40.767072  549077 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.104658ms
	I1205 19:19:40.767195  549077 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 19:19:46.889839  549077 kubeadm.go:310] [api-check] The API server is healthy after 6.126522028s
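Note: the [kubelet-check] and [api-check] probes logged above can be reproduced by hand on the node; a sketch, assuming the default ports used in this run (10248 for kubelet, 8443 for the apiserver):

    # kubelet liveness endpoint (plain HTTP on localhost)
    curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok

    # apiserver liveness endpoint (TLS; -k skips verification for a quick check)
    curl -skf https://127.0.0.1:8443/livez && echo apiserver ok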
	I1205 19:19:46.903949  549077 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:19:46.920566  549077 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:19:46.959559  549077 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:19:46.959762  549077 kubeadm.go:310] [mark-control-plane] Marking the node ha-106302 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:19:46.972882  549077 kubeadm.go:310] [bootstrap-token] Using token: hftusq.bke4u9rqswjxk9ui
	I1205 19:19:46.974672  549077 out.go:235]   - Configuring RBAC rules ...
	I1205 19:19:46.974836  549077 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:19:46.983462  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:19:46.993184  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:19:47.001254  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:19:47.006556  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:19:47.012815  549077 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:19:47.297618  549077 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:19:47.737983  549077 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 19:19:48.297207  549077 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 19:19:48.298256  549077 kubeadm.go:310] 
	I1205 19:19:48.298332  549077 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 19:19:48.298344  549077 kubeadm.go:310] 
	I1205 19:19:48.298499  549077 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 19:19:48.298523  549077 kubeadm.go:310] 
	I1205 19:19:48.298551  549077 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 19:19:48.298654  549077 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:19:48.298730  549077 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:19:48.298740  549077 kubeadm.go:310] 
	I1205 19:19:48.298818  549077 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 19:19:48.298835  549077 kubeadm.go:310] 
	I1205 19:19:48.298894  549077 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:19:48.298903  549077 kubeadm.go:310] 
	I1205 19:19:48.298967  549077 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 19:19:48.299056  549077 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:19:48.299139  549077 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:19:48.299148  549077 kubeadm.go:310] 
	I1205 19:19:48.299267  549077 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:19:48.299368  549077 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 19:19:48.299380  549077 kubeadm.go:310] 
	I1205 19:19:48.299496  549077 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hftusq.bke4u9rqswjxk9ui \
	I1205 19:19:48.299623  549077 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 19:19:48.299658  549077 kubeadm.go:310] 	--control-plane 
	I1205 19:19:48.299667  549077 kubeadm.go:310] 
	I1205 19:19:48.299787  549077 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:19:48.299797  549077 kubeadm.go:310] 
	I1205 19:19:48.299896  549077 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hftusq.bke4u9rqswjxk9ui \
	I1205 19:19:48.300017  549077 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 19:19:48.300978  549077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
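Note: the preflight warning above is benign for this run (minikube starts kubelet itself), but the fix it suggests is a one-liner if the unit should survive reboots:

    sudo systemctl enable kubelet.service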
	I1205 19:19:48.301019  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:48.301039  549077 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 19:19:48.302992  549077 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:19:48.304422  549077 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:19:48.310158  549077 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 19:19:48.310179  549077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 19:19:48.330305  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
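Note: with a single node detected, the CNI manifest applied above is kindnet (see cni.go:136). A quick follow-up check that the CNI pods come up, assuming the manifest creates a DaemonSet named kindnet in kube-system (name not captured in this log):

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset kindnet --timeout=120s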
	I1205 19:19:48.708578  549077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:19:48.708692  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:48.708697  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302 minikube.k8s.io/updated_at=2024_12_05T19_19_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=true
	I1205 19:19:48.766673  549077 ops.go:34] apiserver oom_adj: -16
	I1205 19:19:48.946725  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:49.447511  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:49.947827  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:50.447219  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:50.947321  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:51.447070  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:51.946846  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:52.030950  549077 kubeadm.go:1113] duration metric: took 3.322332375s to wait for elevateKubeSystemPrivileges
	I1205 19:19:52.030984  549077 kubeadm.go:394] duration metric: took 14.851420641s to StartCluster
	I1205 19:19:52.031005  549077 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:52.031096  549077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:52.032088  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:52.032382  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:19:52.032390  549077 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:52.032418  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:19:52.032436  549077 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 19:19:52.032529  549077 addons.go:69] Setting storage-provisioner=true in profile "ha-106302"
	I1205 19:19:52.032562  549077 addons.go:234] Setting addon storage-provisioner=true in "ha-106302"
	I1205 19:19:52.032575  549077 addons.go:69] Setting default-storageclass=true in profile "ha-106302"
	I1205 19:19:52.032596  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:19:52.032603  549077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-106302"
	I1205 19:19:52.032616  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:52.032974  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.033012  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.033080  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.033128  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.048867  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I1205 19:19:52.048932  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
	I1205 19:19:52.049474  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.049598  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.050083  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.050108  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.050196  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.050217  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.050494  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.050547  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.050740  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.051108  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.051156  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.053000  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:52.053380  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:19:52.053986  549077 cert_rotation.go:140] Starting client certificate rotation controller
	I1205 19:19:52.054434  549077 addons.go:234] Setting addon default-storageclass=true in "ha-106302"
	I1205 19:19:52.054485  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:19:52.054871  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.054924  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.068403  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
	I1205 19:19:52.069056  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.069816  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.069851  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.070279  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.070500  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.071258  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I1205 19:19:52.071775  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.072386  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.072414  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.072576  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:52.072784  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.073435  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.073491  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.074239  549077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:19:52.075532  549077 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:19:52.075550  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:19:52.075581  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:52.079231  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.079693  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:52.079729  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.080048  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:52.080297  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:52.080464  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:52.080625  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:52.090582  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1205 19:19:52.091077  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.091649  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.091690  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.092023  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.092235  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.093928  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:52.094164  549077 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:19:52.094184  549077 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:19:52.094204  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:52.097425  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.097952  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:52.097988  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.098172  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:52.098357  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:52.098547  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:52.098690  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:52.240649  549077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:19:52.260476  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:19:52.326335  549077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:19:53.107266  549077 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
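Note: the sed pipeline at 19:19:52.260476 rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway. A sketch for inspecting the result (the expected stanza below is what that sed inserts, not output captured from this run):

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # the injected stanza should read, roughly:
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }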
	I1205 19:19:53.107380  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107404  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107428  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107411  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107855  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.107863  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.107872  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.107875  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.107881  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107889  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107898  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107909  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.108388  549077 main.go:141] libmachine: (ha-106302) DBG | Closing plugin on server side
	I1205 19:19:53.108430  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.108447  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.108523  549077 main.go:141] libmachine: (ha-106302) DBG | Closing plugin on server side
	I1205 19:19:53.108536  549077 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 19:19:53.108552  549077 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 19:19:53.108666  549077 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1205 19:19:53.108672  549077 round_trippers.go:469] Request Headers:
	I1205 19:19:53.108683  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:19:53.108690  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:19:53.108977  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.109004  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.122784  549077 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1205 19:19:53.123463  549077 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1205 19:19:53.123481  549077 round_trippers.go:469] Request Headers:
	I1205 19:19:53.123489  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:19:53.123494  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:19:53.123497  549077 round_trippers.go:473]     Content-Type: application/json
	I1205 19:19:53.127870  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:19:53.128387  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.128421  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.128753  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.128782  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.130618  549077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1205 19:19:53.131922  549077 addons.go:510] duration metric: took 1.09949066s for enable addons: enabled=[storage-provisioner default-storageclass]
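Note: storage-provisioner and default-storageclass are the two addons enabled by default for a fresh profile (see the toEnable map at 19:19:52.032436). Should either need to be toggled again later, the equivalent manual invocation is (profile name taken from this run):

    minikube -p ha-106302 addons enable storage-provisioner
    minikube -p ha-106302 addons enable default-storageclass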
	I1205 19:19:53.131966  549077 start.go:246] waiting for cluster config update ...
	I1205 19:19:53.131976  549077 start.go:255] writing updated cluster config ...
	I1205 19:19:53.133784  549077 out.go:201] 
	I1205 19:19:53.135291  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:53.135384  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:53.137100  549077 out.go:177] * Starting "ha-106302-m02" control-plane node in "ha-106302" cluster
	I1205 19:19:53.138489  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:53.138517  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:19:53.138635  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:19:53.138649  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:19:53.138720  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:53.138982  549077 start.go:360] acquireMachinesLock for ha-106302-m02: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:19:53.139025  549077 start.go:364] duration metric: took 23.765µs to acquireMachinesLock for "ha-106302-m02"
	I1205 19:19:53.139048  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:53.139118  549077 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1205 19:19:53.140509  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:19:53.140599  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:53.140636  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:53.156622  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I1205 19:19:53.157158  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:53.157623  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:53.157649  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:53.157947  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:53.158168  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:19:53.158323  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:19:53.158520  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:19:53.158562  549077 client.go:168] LocalClient.Create starting
	I1205 19:19:53.158607  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:19:53.158656  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:53.158704  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:53.158778  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:19:53.158809  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:53.158825  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:53.158852  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:19:53.158863  549077 main.go:141] libmachine: (ha-106302-m02) Calling .PreCreateCheck
	I1205 19:19:53.159044  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:19:53.159562  549077 main.go:141] libmachine: Creating machine...
	I1205 19:19:53.159580  549077 main.go:141] libmachine: (ha-106302-m02) Calling .Create
	I1205 19:19:53.159720  549077 main.go:141] libmachine: (ha-106302-m02) Creating KVM machine...
	I1205 19:19:53.161306  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found existing default KVM network
	I1205 19:19:53.161451  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found existing private KVM network mk-ha-106302
	I1205 19:19:53.161677  549077 main.go:141] libmachine: (ha-106302-m02) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 ...
	I1205 19:19:53.161706  549077 main.go:141] libmachine: (ha-106302-m02) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:19:53.161792  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.161686  549462 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:53.161946  549077 main.go:141] libmachine: (ha-106302-m02) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:19:53.454907  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.454778  549462 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa...
	I1205 19:19:53.629727  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.629571  549462 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/ha-106302-m02.rawdisk...
	I1205 19:19:53.629774  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Writing magic tar header
	I1205 19:19:53.629794  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Writing SSH key tar header
	I1205 19:19:53.629802  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.629693  549462 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 ...
	I1205 19:19:53.629813  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02
	I1205 19:19:53.629877  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 (perms=drwx------)
	I1205 19:19:53.629901  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:19:53.629937  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:19:53.629971  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:19:53.629982  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:53.629997  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:19:53.630005  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:19:53.630016  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:19:53.630032  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:19:53.630058  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:19:53.630069  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:19:53.630084  549077 main.go:141] libmachine: (ha-106302-m02) Creating domain...
	I1205 19:19:53.630098  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home
	I1205 19:19:53.630111  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Skipping /home - not owner
	I1205 19:19:53.630931  549077 main.go:141] libmachine: (ha-106302-m02) define libvirt domain using xml: 
	I1205 19:19:53.630951  549077 main.go:141] libmachine: (ha-106302-m02) <domain type='kvm'>
	I1205 19:19:53.630961  549077 main.go:141] libmachine: (ha-106302-m02)   <name>ha-106302-m02</name>
	I1205 19:19:53.630968  549077 main.go:141] libmachine: (ha-106302-m02)   <memory unit='MiB'>2200</memory>
	I1205 19:19:53.630977  549077 main.go:141] libmachine: (ha-106302-m02)   <vcpu>2</vcpu>
	I1205 19:19:53.630984  549077 main.go:141] libmachine: (ha-106302-m02)   <features>
	I1205 19:19:53.630994  549077 main.go:141] libmachine: (ha-106302-m02)     <acpi/>
	I1205 19:19:53.630998  549077 main.go:141] libmachine: (ha-106302-m02)     <apic/>
	I1205 19:19:53.631006  549077 main.go:141] libmachine: (ha-106302-m02)     <pae/>
	I1205 19:19:53.631010  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631018  549077 main.go:141] libmachine: (ha-106302-m02)   </features>
	I1205 19:19:53.631023  549077 main.go:141] libmachine: (ha-106302-m02)   <cpu mode='host-passthrough'>
	I1205 19:19:53.631031  549077 main.go:141] libmachine: (ha-106302-m02)   
	I1205 19:19:53.631048  549077 main.go:141] libmachine: (ha-106302-m02)   </cpu>
	I1205 19:19:53.631078  549077 main.go:141] libmachine: (ha-106302-m02)   <os>
	I1205 19:19:53.631098  549077 main.go:141] libmachine: (ha-106302-m02)     <type>hvm</type>
	I1205 19:19:53.631107  549077 main.go:141] libmachine: (ha-106302-m02)     <boot dev='cdrom'/>
	I1205 19:19:53.631116  549077 main.go:141] libmachine: (ha-106302-m02)     <boot dev='hd'/>
	I1205 19:19:53.631124  549077 main.go:141] libmachine: (ha-106302-m02)     <bootmenu enable='no'/>
	I1205 19:19:53.631134  549077 main.go:141] libmachine: (ha-106302-m02)   </os>
	I1205 19:19:53.631143  549077 main.go:141] libmachine: (ha-106302-m02)   <devices>
	I1205 19:19:53.631154  549077 main.go:141] libmachine: (ha-106302-m02)     <disk type='file' device='cdrom'>
	I1205 19:19:53.631183  549077 main.go:141] libmachine: (ha-106302-m02)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/boot2docker.iso'/>
	I1205 19:19:53.631194  549077 main.go:141] libmachine: (ha-106302-m02)       <target dev='hdc' bus='scsi'/>
	I1205 19:19:53.631203  549077 main.go:141] libmachine: (ha-106302-m02)       <readonly/>
	I1205 19:19:53.631212  549077 main.go:141] libmachine: (ha-106302-m02)     </disk>
	I1205 19:19:53.631221  549077 main.go:141] libmachine: (ha-106302-m02)     <disk type='file' device='disk'>
	I1205 19:19:53.631237  549077 main.go:141] libmachine: (ha-106302-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:19:53.631252  549077 main.go:141] libmachine: (ha-106302-m02)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/ha-106302-m02.rawdisk'/>
	I1205 19:19:53.631263  549077 main.go:141] libmachine: (ha-106302-m02)       <target dev='hda' bus='virtio'/>
	I1205 19:19:53.631274  549077 main.go:141] libmachine: (ha-106302-m02)     </disk>
	I1205 19:19:53.631284  549077 main.go:141] libmachine: (ha-106302-m02)     <interface type='network'>
	I1205 19:19:53.631293  549077 main.go:141] libmachine: (ha-106302-m02)       <source network='mk-ha-106302'/>
	I1205 19:19:53.631316  549077 main.go:141] libmachine: (ha-106302-m02)       <model type='virtio'/>
	I1205 19:19:53.631331  549077 main.go:141] libmachine: (ha-106302-m02)     </interface>
	I1205 19:19:53.631344  549077 main.go:141] libmachine: (ha-106302-m02)     <interface type='network'>
	I1205 19:19:53.631354  549077 main.go:141] libmachine: (ha-106302-m02)       <source network='default'/>
	I1205 19:19:53.631367  549077 main.go:141] libmachine: (ha-106302-m02)       <model type='virtio'/>
	I1205 19:19:53.631376  549077 main.go:141] libmachine: (ha-106302-m02)     </interface>
	I1205 19:19:53.631384  549077 main.go:141] libmachine: (ha-106302-m02)     <serial type='pty'>
	I1205 19:19:53.631393  549077 main.go:141] libmachine: (ha-106302-m02)       <target port='0'/>
	I1205 19:19:53.631401  549077 main.go:141] libmachine: (ha-106302-m02)     </serial>
	I1205 19:19:53.631415  549077 main.go:141] libmachine: (ha-106302-m02)     <console type='pty'>
	I1205 19:19:53.631426  549077 main.go:141] libmachine: (ha-106302-m02)       <target type='serial' port='0'/>
	I1205 19:19:53.631434  549077 main.go:141] libmachine: (ha-106302-m02)     </console>
	I1205 19:19:53.631446  549077 main.go:141] libmachine: (ha-106302-m02)     <rng model='virtio'>
	I1205 19:19:53.631457  549077 main.go:141] libmachine: (ha-106302-m02)       <backend model='random'>/dev/random</backend>
	I1205 19:19:53.631468  549077 main.go:141] libmachine: (ha-106302-m02)     </rng>
	I1205 19:19:53.631474  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631496  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631509  549077 main.go:141] libmachine: (ha-106302-m02)   </devices>
	I1205 19:19:53.631522  549077 main.go:141] libmachine: (ha-106302-m02) </domain>
	I1205 19:19:53.631527  549077 main.go:141] libmachine: (ha-106302-m02) 
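Note: the block above is the full libvirt domain XML the kvm2 driver defines for ha-106302-m02 (boot ISO on a SCSI CD-ROM, the raw disk image, two virtio NICs on the mk-ha-106302 and default networks, a serial console and a virtio RNG). As a rough illustration only, not minikube's actual code path, defining and booting a guest from such an XML document with the libvirt Go bindings could look like the sketch below; the import path, file name and connection URI are assumptions made for the example.

// Sketch: define and start a KVM guest from a libvirt domain XML string.
// Assumes the libvirt Go bindings (libvirt.org/go/libvirt) and a local
// qemu:///system socket; "ha-106302-m02.xml" is a hypothetical file holding
// the <domain> document shown above.
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-106302-m02.xml")
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Persistently define the domain, then start it (roughly the
	// "define libvirt domain using xml" step followed by "Creating domain...").
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}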
	I1205 19:19:53.638274  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:3d:5d:13 in network default
	I1205 19:19:53.638929  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring networks are active...
	I1205 19:19:53.638948  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:53.639739  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring network default is active
	I1205 19:19:53.639999  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring network mk-ha-106302 is active
	I1205 19:19:53.640360  549077 main.go:141] libmachine: (ha-106302-m02) Getting domain xml...
	I1205 19:19:53.640970  549077 main.go:141] libmachine: (ha-106302-m02) Creating domain...
	I1205 19:19:54.858939  549077 main.go:141] libmachine: (ha-106302-m02) Waiting to get IP...
	I1205 19:19:54.859905  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:54.860367  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:54.860447  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:54.860358  549462 retry.go:31] will retry after 210.406566ms: waiting for machine to come up
	I1205 19:19:55.072865  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.073270  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.073303  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.073236  549462 retry.go:31] will retry after 380.564554ms: waiting for machine to come up
	I1205 19:19:55.456055  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.456633  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.456664  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.456575  549462 retry.go:31] will retry after 318.906554ms: waiting for machine to come up
	I1205 19:19:55.777216  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.777679  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.777710  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.777619  549462 retry.go:31] will retry after 557.622429ms: waiting for machine to come up
	I1205 19:19:56.337019  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:56.337517  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:56.337547  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:56.337452  549462 retry.go:31] will retry after 733.803738ms: waiting for machine to come up
	I1205 19:19:57.072993  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:57.073519  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:57.073554  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:57.073464  549462 retry.go:31] will retry after 792.053725ms: waiting for machine to come up
	I1205 19:19:57.866686  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:57.867255  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:57.867284  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:57.867204  549462 retry.go:31] will retry after 899.083916ms: waiting for machine to come up
	I1205 19:19:58.767474  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:58.767846  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:58.767879  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:58.767799  549462 retry.go:31] will retry after 894.520794ms: waiting for machine to come up
	I1205 19:19:59.663948  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:59.664483  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:59.664517  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:59.664431  549462 retry.go:31] will retry after 1.445971502s: waiting for machine to come up
	I1205 19:20:01.112081  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:01.112472  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:01.112497  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:01.112419  549462 retry.go:31] will retry after 2.114052847s: waiting for machine to come up
	I1205 19:20:03.228602  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:03.229091  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:03.229116  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:03.229037  549462 retry.go:31] will retry after 2.786335133s: waiting for machine to come up
	I1205 19:20:06.019023  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:06.019472  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:06.019494  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:06.019436  549462 retry.go:31] will retry after 3.312152878s: waiting for machine to come up
	I1205 19:20:09.332971  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:09.333454  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:09.333485  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:09.333375  549462 retry.go:31] will retry after 4.193621264s: waiting for machine to come up
	I1205 19:20:13.528190  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:13.528561  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:13.528582  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:13.528513  549462 retry.go:31] will retry after 5.505002432s: waiting for machine to come up
	I1205 19:20:19.035383  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.035839  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has current primary IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.035869  549077 main.go:141] libmachine: (ha-106302-m02) Found IP for machine: 192.168.39.22
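Note: the run of "will retry after ..." lines above is the driver polling the libvirt network's DHCP leases for the guest's MAC, sleeping a growing interval between attempts until an address appears. A minimal, generic sketch of that wait loop follows; lookupLeaseIP and waitForIP are hypothetical stand-ins, not minikube's retry.go, and only the retry shape mirrors the log.

// Sketch: poll for a machine IP with growing delays and an overall deadline.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP would query the libvirt network's DHCP leases for the MAC;
// this placeholder always fails so the example stays self-contained.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(mac string, deadline time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	timeout := time.After(deadline)
	for {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		select {
		case <-timeout:
			return "", fmt.Errorf("timed out waiting for machine to come up")
		case <-time.After(delay):
		}
		// Grow the delay, loosely like the 210ms ... 5.5s progression above.
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
}

func main() {
	// Short deadline so the placeholder example returns quickly.
	ip, err := waitForIP("52:54:00:50:91:17", 3*time.Second)
	fmt.Println(ip, err)
}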
	I1205 19:20:19.035884  549077 main.go:141] libmachine: (ha-106302-m02) Reserving static IP address...
	I1205 19:20:19.036316  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find host DHCP lease matching {name: "ha-106302-m02", mac: "52:54:00:50:91:17", ip: "192.168.39.22"} in network mk-ha-106302
	I1205 19:20:19.111128  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Getting to WaitForSSH function...
	I1205 19:20:19.111162  549077 main.go:141] libmachine: (ha-106302-m02) Reserved static IP address: 192.168.39.22
	I1205 19:20:19.111175  549077 main.go:141] libmachine: (ha-106302-m02) Waiting for SSH to be available...
	I1205 19:20:19.113732  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.114085  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302
	I1205 19:20:19.114114  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find defined IP address of network mk-ha-106302 interface with MAC address 52:54:00:50:91:17
	I1205 19:20:19.114257  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH client type: external
	I1205 19:20:19.114278  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa (-rw-------)
	I1205 19:20:19.114319  549077 main.go:141] libmachine: (ha-106302-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:20:19.114332  549077 main.go:141] libmachine: (ha-106302-m02) DBG | About to run SSH command:
	I1205 19:20:19.114349  549077 main.go:141] libmachine: (ha-106302-m02) DBG | exit 0
	I1205 19:20:19.118035  549077 main.go:141] libmachine: (ha-106302-m02) DBG | SSH cmd err, output: exit status 255: 
	I1205 19:20:19.118057  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 19:20:19.118065  549077 main.go:141] libmachine: (ha-106302-m02) DBG | command : exit 0
	I1205 19:20:19.118070  549077 main.go:141] libmachine: (ha-106302-m02) DBG | err     : exit status 255
	I1205 19:20:19.118077  549077 main.go:141] libmachine: (ha-106302-m02) DBG | output  : 
	I1205 19:20:22.120219  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Getting to WaitForSSH function...
	I1205 19:20:22.122541  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.122838  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.122871  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.122905  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH client type: external
	I1205 19:20:22.122934  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa (-rw-------)
	I1205 19:20:22.122975  549077 main.go:141] libmachine: (ha-106302-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:20:22.122988  549077 main.go:141] libmachine: (ha-106302-m02) DBG | About to run SSH command:
	I1205 19:20:22.122997  549077 main.go:141] libmachine: (ha-106302-m02) DBG | exit 0
	I1205 19:20:22.248910  549077 main.go:141] libmachine: (ha-106302-m02) DBG | SSH cmd err, output: <nil>: 
	I1205 19:20:22.249203  549077 main.go:141] libmachine: (ha-106302-m02) KVM machine creation complete!
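Note: WaitForSSH above simply runs "exit 0" over ssh and treats any failure (the first attempt returns status 255 before sshd is up) as "not ready yet", retrying a few seconds later. A hedged sketch of that probe using the system ssh client via os/exec; the flags mirror the ones in the log, but the user, host and key path are example values and this is not the driver's implementation.

// Sketch: probe SSH readiness by running `exit 0` through the system ssh
// binary, retrying until it returns status 0.
package main

import (
	"log"
	"os/exec"
	"time"
)

func sshReady(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		user+"@"+host,
		"exit 0",
	)
	// A non-nil error means a connect failure or a non-zero exit status.
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 20; i++ {
		if sshReady("docker", "192.168.39.22", "/path/to/id_rsa") {
			log.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log retries roughly every 3 seconds
	}
	log.Fatal("gave up waiting for SSH")
}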
	I1205 19:20:22.249549  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:20:22.250245  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:22.250531  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:22.250724  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:20:22.250739  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetState
	I1205 19:20:22.252145  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:20:22.252159  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:20:22.252171  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:20:22.252176  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.255218  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.255608  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.255639  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.255817  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.256017  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.256246  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.256424  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.256663  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.256916  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.256931  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:20:22.368260  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:20:22.368313  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:20:22.368324  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.371040  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.371460  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.371481  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.371672  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.371891  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.372059  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.372173  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.372389  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.372564  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.372578  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:20:22.485513  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:20:22.485607  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:20:22.485621  549077 main.go:141] libmachine: Provisioning with buildroot...
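Note: provisioner detection is just `cat /etc/os-release` plus a match on the key=value fields (ID=buildroot here selects the buildroot provisioner). A small sketch of parsing that output; parseOSRelease is a hypothetical helper, not minikube's own detection code.

// Sketch: parse /etc/os-release style output and pick the ID field.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fields := parseOSRelease(out)
	fmt.Println("provisioner:", fields["ID"]) // "buildroot"
}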
	I1205 19:20:22.485637  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.485917  549077 buildroot.go:166] provisioning hostname "ha-106302-m02"
	I1205 19:20:22.485951  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.486197  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.489137  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.489476  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.489498  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.489650  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.489844  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.489970  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.490109  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.490248  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.490464  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.490479  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302-m02 && echo "ha-106302-m02" | sudo tee /etc/hostname
	I1205 19:20:22.616293  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302-m02
	
	I1205 19:20:22.616334  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.618960  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.619345  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.619376  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.619593  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.619776  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.619933  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.620106  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.620296  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.620475  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.620492  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:20:22.738362  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
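Note: the hostname step above is two SSH commands: set the hostname and write /etc/hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for it. A small sketch that builds the same shell snippet for a given name; hostnameScript is a hypothetical helper, and only the generated text mirrors the commands in the log.

// Sketch: build the hostname/hosts provisioning shell used above.
package main

import "fmt"

func hostnameScript(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameScript("ha-106302-m02"))
}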
	I1205 19:20:22.738404  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:20:22.738463  549077 buildroot.go:174] setting up certificates
	I1205 19:20:22.738483  549077 provision.go:84] configureAuth start
	I1205 19:20:22.738504  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.738844  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:22.741581  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.741992  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.742022  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.742170  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.744256  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.744573  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.744600  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.744740  549077 provision.go:143] copyHostCerts
	I1205 19:20:22.744774  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:20:22.744818  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:20:22.744828  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:20:22.744891  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:20:22.744975  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:20:22.744994  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:20:22.745000  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:20:22.745024  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:20:22.745615  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:20:22.745684  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:20:22.745691  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:20:22.745739  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:20:22.745877  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302-m02 san=[127.0.0.1 192.168.39.22 ha-106302-m02 localhost minikube]
	I1205 19:20:22.796359  549077 provision.go:177] copyRemoteCerts
	I1205 19:20:22.796421  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:20:22.796448  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.799357  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.799732  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.799766  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.799995  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.800198  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.800385  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.800538  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:22.887828  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:20:22.887929  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:20:22.916212  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:20:22.916319  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:20:22.941232  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:20:22.941341  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:20:22.967161  549077 provision.go:87] duration metric: took 228.658819ms to configureAuth
	I1205 19:20:22.967199  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:20:22.967392  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:22.967485  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.970286  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.970715  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.970749  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.970939  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.971156  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.971320  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.971433  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.971580  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.971846  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.971863  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:20:23.207888  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:20:23.207924  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:20:23.207935  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetURL
	I1205 19:20:23.209276  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using libvirt version 6000000
	I1205 19:20:23.211506  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.211907  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.211936  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.212208  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:20:23.212224  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:20:23.212232  549077 client.go:171] duration metric: took 30.053657655s to LocalClient.Create
	I1205 19:20:23.212256  549077 start.go:167] duration metric: took 30.053742841s to libmachine.API.Create "ha-106302"
	I1205 19:20:23.212293  549077 start.go:293] postStartSetup for "ha-106302-m02" (driver="kvm2")
	I1205 19:20:23.212310  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:20:23.212333  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.212577  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:20:23.212606  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.215114  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.215516  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.215546  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.215705  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.215924  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.216106  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.216253  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.304000  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:20:23.308581  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:20:23.308614  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:20:23.308698  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:20:23.308795  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:20:23.308810  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:20:23.308927  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:20:23.319412  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:20:23.344460  549077 start.go:296] duration metric: took 132.146002ms for postStartSetup
	I1205 19:20:23.344545  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:20:23.345277  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:23.348207  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.348665  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.348693  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.348984  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:20:23.349202  549077 start.go:128] duration metric: took 30.210071126s to createHost
	I1205 19:20:23.349267  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.351860  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.352216  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.352247  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.352437  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.352631  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.352819  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.352959  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.353129  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:23.353382  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:23.353399  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:20:23.465312  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426423.446273328
	
	I1205 19:20:23.465337  549077 fix.go:216] guest clock: 1733426423.446273328
	I1205 19:20:23.465346  549077 fix.go:229] Guest: 2024-12-05 19:20:23.446273328 +0000 UTC Remote: 2024-12-05 19:20:23.349227376 +0000 UTC m=+77.722963766 (delta=97.045952ms)
	I1205 19:20:23.465364  549077 fix.go:200] guest clock delta is within tolerance: 97.045952ms
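Note: the clock check runs `date +%s.%N` on the guest and compares the returned seconds.nanoseconds value with the host's wall clock; only if the absolute delta exceeded a tolerance would the driver resync the guest clock. A short sketch of that comparison, using the values from the log; the 2s tolerance is an assumed example value, not necessarily what fix.go uses.

// Sketch: compare a guest timestamp (output of `date +%s.%N`) against a
// reference time and report whether the drift is within tolerance.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func guestDelta(guestOut string, now time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return now.Sub(guest), nil
}

func main() {
	delta, err := guestDelta("1733426423.446273328", time.Unix(1733426423, 349227376))
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed example tolerance
	within := math.Abs(float64(delta)) <= float64(tolerance)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, within)
}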
	I1205 19:20:23.465370  549077 start.go:83] releasing machines lock for "ha-106302-m02", held for 30.326335436s
	I1205 19:20:23.465398  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.465708  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:23.468308  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.468731  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.468764  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.471281  549077 out.go:177] * Found network options:
	I1205 19:20:23.472818  549077 out.go:177]   - NO_PROXY=192.168.39.185
	W1205 19:20:23.473976  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:20:23.474014  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474583  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474762  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474896  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:20:23.474942  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	W1205 19:20:23.474975  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:20:23.475049  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:20:23.475075  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.477606  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.477936  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.477969  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.477989  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.478113  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.478273  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.478379  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.478405  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.478432  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.478613  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.478614  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.478752  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.478903  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.479088  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.717492  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:20:23.724398  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:20:23.724467  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:20:23.742377  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:20:23.742416  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:20:23.742481  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:20:23.759474  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:20:23.774720  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:20:23.774808  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:20:23.790887  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:20:23.807005  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:20:23.919834  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:20:24.073552  549077 docker.go:233] disabling docker service ...
	I1205 19:20:24.073644  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:20:24.088648  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:20:24.103156  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:20:24.227966  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:20:24.343808  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:20:24.359016  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:20:24.378372  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:20:24.378434  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.390093  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:20:24.390163  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.402052  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.413868  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.425063  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:20:24.436756  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.448351  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.466246  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.477646  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:20:24.487958  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:20:24.488022  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:20:24.504864  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
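Note: when `sysctl net.bridge.bridge-nf-call-iptables` fails because the module is not loaded (the status-255 error above), the setup falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A hedged, standalone sketch of that check-then-fallback sequence via os/exec; it mirrors the commands in the log but is not the test harness itself.

// Sketch: verify the bridge netfilter sysctl, load br_netfilter as a
// fallback, then enable IPv4 forwarding (requires root to actually run).
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		log.Printf("sysctl check failed (%v); loading br_netfilter", err)
		if err := run("modprobe", "br_netfilter"); err != nil {
			log.Fatal(err)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}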
	I1205 19:20:24.516929  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:24.650055  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:20:24.749984  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:20:24.750068  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:20:24.754929  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:20:24.754993  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:20:24.758880  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:20:24.803432  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
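Note: after restarting CRI-O, the test waits up to 60s for the runtime socket to exist and for `crictl version` to answer (the version output above). A generic sketch of that readiness wait; the socket path comes from the log, while waitForFile and its poll interval are assumptions for the example.

// Sketch: wait up to a deadline for the CRI-O socket to appear, then confirm
// the runtime answers `crictl version`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	out, err := exec.Command("crictl", "version").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}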
	I1205 19:20:24.803519  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:20:24.832773  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:20:24.866071  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:20:24.867336  549077 out.go:177]   - env NO_PROXY=192.168.39.185
	I1205 19:20:24.868566  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:24.871432  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:24.871918  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:24.871951  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:24.872171  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:20:24.876554  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:20:24.890047  549077 mustload.go:65] Loading cluster: ha-106302
	I1205 19:20:24.890241  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:24.890558  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:24.890603  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:24.905579  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I1205 19:20:24.906049  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:24.906603  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:24.906625  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:24.906945  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:24.907214  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:20:24.908815  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:20:24.909241  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:24.909290  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:24.924888  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I1205 19:20:24.925342  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:24.925844  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:24.925864  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:24.926328  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:24.926542  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:20:24.926741  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.22
	I1205 19:20:24.926754  549077 certs.go:194] generating shared ca certs ...
	I1205 19:20:24.926770  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:24.926902  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:20:24.926939  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:20:24.926948  549077 certs.go:256] generating profile certs ...
	I1205 19:20:24.927023  549077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:20:24.927047  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c
	I1205 19:20:24.927061  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.254]
	I1205 19:20:25.018998  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c ...
	I1205 19:20:25.019030  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c: {Name:mkb73e87a5bbbf4f4c79d1fb041b857c135f5f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:25.019217  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c ...
	I1205 19:20:25.019230  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c: {Name:mk2fba0e13caab29e22d03865232eceeba478b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:25.019304  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:20:25.019444  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:20:25.019581  549077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:20:25.019598  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:20:25.019611  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:20:25.019630  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:20:25.019645  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:20:25.019658  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:20:25.019670  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:20:25.019681  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:20:25.019693  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:20:25.019742  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:20:25.019769  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:20:25.019780  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:20:25.019800  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:20:25.019822  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:20:25.019843  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:20:25.019881  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:20:25.019905  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.019919  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.019931  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.019965  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:20:25.022938  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:25.023319  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:20:25.023341  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:25.023553  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:20:25.023832  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:20:25.024047  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:20:25.024204  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:20:25.100678  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 19:20:25.110731  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 19:20:25.125160  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 19:20:25.130012  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 19:20:25.140972  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 19:20:25.146148  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 19:20:25.157617  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 19:20:25.162172  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1205 19:20:25.173149  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 19:20:25.178465  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 19:20:25.189406  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 19:20:25.193722  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1205 19:20:25.206028  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:20:25.233287  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:20:25.261305  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:20:25.289482  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:20:25.316415  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 19:20:25.342226  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:20:25.368246  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:20:25.393426  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:20:25.419609  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:20:25.445786  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:20:25.469979  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:20:25.493824  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 19:20:25.510843  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 19:20:25.527645  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 19:20:25.545705  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1205 19:20:25.563452  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 19:20:25.580089  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1205 19:20:25.596848  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
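
The block above stages the control-plane key material for the new member: the shared pieces (sa.pub, sa.key, the front-proxy and etcd CAs) are read back over SSH into memory, and the cluster and profile certificates are pushed out to /var/lib/minikube/certs, so the joining control plane ends up with the same cluster keys as the primary instead of minting its own. A minimal sketch of one such copy using golang.org/x/crypto/ssh; the address, user, key path, and file names below are illustrative placeholders, not values asserted by this log.

// copycert.go: push one local PEM file to a remote path over SSH (illustrative sketch).
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func copyFile(client *ssh.Client, local, remote string) error {
	data, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	// Stream the file on stdin and let sudo tee write it on the far side.
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", remote))
}

func main() {
	key, err := os.ReadFile("/path/to/machines/node/id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.22:22", cfg) // placeholder address
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := copyFile(client, "ca.crt", "/var/lib/minikube/certs/ca.crt"); err != nil {
		panic(err)
	}
}
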
	I1205 19:20:25.613807  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:20:25.619697  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:20:25.630983  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.635623  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.635686  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.641677  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:20:25.653239  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:20:25.664932  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.669827  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.669897  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.675619  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:20:25.687127  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:20:25.698338  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.702836  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.702900  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.708667  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
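
The openssl x509 -hash sequence above is how the node's trust store is extended: each certificate dropped under /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <subject-hash>.0 so OpenSSL-based clients resolve it. A small sketch of the same two steps, assuming root and a placeholder certificate path:

// catrust.go: link a certificate into /etc/ssl/certs under its OpenSSL subject hash (sketch).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder: any PEM certificate
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Idempotent, like the "test -L || ln -fs" in the log: only link when missing.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trusted via", link)
}
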
	I1205 19:20:25.720085  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:20:25.724316  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:20:25.724377  549077 kubeadm.go:934] updating node {m02 192.168.39.22 8443 v1.31.2 crio true true} ...
	I1205 19:20:25.724468  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
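
The kubelet unit rendered above is per node: ExecStart is cleared and re-set so the node-specific flags (--hostname-override=ha-106302-m02, --node-ip=192.168.39.22) ride on top of the shared config.yaml. A minimal text/template sketch of producing such a drop-in; the template text and field names are illustrative, not minikube's actual template.

// kubelet_unit.go: render a per-node kubelet systemd unit (illustrative sketch).
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.2", "ha-106302-m02", "192.168.39.22"}
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
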
	I1205 19:20:25.724495  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:20:25.724527  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:20:25.742381  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:20:25.742481  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
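
kube-vip runs as a static pod on every control plane: leader election on the plndr-cp-lock lease decides which node currently answers for the VIP 192.168.39.254, and lb_enable/lb_port spread apiserver traffic on 8443 across the real control-plane endpoints. A quick reachability probe of that VIP, as a hedged sketch (address and port are the ones in this manifest; certificate verification is skipped because the probe only checks that something terminates TLS there):

// vipprobe.go: confirm the kube-vip virtual IP is serving TLS on the apiserver port.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	dialer := &net.Dialer{Timeout: 5 * time.Second}
	conn, err := tls.DialWithDialer(dialer, "tcp", "192.168.39.254:8443", &tls.Config{
		InsecureSkipVerify: true, // reachability probe only; do not do this for real clients
	})
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	state := conn.ConnectionState()
	fmt.Println("VIP up, server cert CN:", state.PeerCertificates[0].Subject.CommonName)
}
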
	I1205 19:20:25.742576  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:20:25.753160  549077 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 19:20:25.753241  549077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 19:20:25.763396  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 19:20:25.763426  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:20:25.763482  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:20:25.763508  549077 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1205 19:20:25.763539  549077 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1205 19:20:25.767948  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 19:20:25.767974  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 19:20:27.082938  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:20:27.083030  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:20:27.089029  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 19:20:27.089083  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 19:20:27.157306  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:20:27.187033  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:20:27.187142  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:20:27.195317  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 19:20:27.195366  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
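
The three Kubernetes binaries above are fetched from dl.k8s.io with a checksum=file:...sha256 companion URL, cached under .minikube/cache, and then copied into /var/lib/minikube/binaries on the node. A minimal standard-library sketch of downloading one binary and verifying its published SHA-256 (the release URL is real; the local output path is a placeholder):

// fetch.go: download a Kubernetes binary and check it against its .sha256 file (sketch).
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0] // file holds "<hex>" or "<hex>  <name>"
	h := sha256.Sum256(bin)
	got := hex.EncodeToString(h[:])
	if got != want {
		panic("checksum mismatch for kubelet")
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil { // placeholder output path
		panic(err)
	}
	fmt.Println("kubelet verified:", got)
}
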
	I1205 19:20:27.686796  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 19:20:27.697152  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1205 19:20:27.715018  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:20:27.734908  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:20:27.752785  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:20:27.756906  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:20:27.769582  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:27.907328  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:20:27.931860  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:20:27.932222  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:27.932282  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:27.948463  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I1205 19:20:27.949044  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:27.949565  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:27.949592  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:27.949925  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:27.950146  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:20:27.950314  549077 start.go:317] joinCluster: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:20:27.950422  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 19:20:27.950440  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:20:27.953425  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:27.953881  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:20:27.953912  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:27.954070  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:20:27.954316  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:20:27.954453  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:20:27.954606  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:20:28.113909  549077 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:20:28.113956  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kqxul8.esbt6vl0oo3pylcw --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443"
	I1205 19:20:49.921346  549077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kqxul8.esbt6vl0oo3pylcw --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443": (21.80735449s)
	I1205 19:20:49.921399  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 19:20:50.372592  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302-m02 minikube.k8s.io/updated_at=2024_12_05T19_20_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=false
	I1205 19:20:50.546557  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-106302-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 19:20:50.670851  549077 start.go:319] duration metric: took 22.720530002s to joinCluster
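
The join above authenticates the cluster with a bootstrap token plus --discovery-token-ca-cert-hash, which kubeadm computes as the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. The hash printed by "kubeadm token create --print-join-command" on the primary can be reproduced from ca.crt, as in this sketch (the certificate path is a placeholder):

// cahash.go: reproduce kubeadm's --discovery-token-ca-cert-hash from the cluster CA (sketch).
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
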
	I1205 19:20:50.670996  549077 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:20:50.671311  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:50.672473  549077 out.go:177] * Verifying Kubernetes components...
	I1205 19:20:50.673807  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:50.984620  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:20:51.019677  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:20:51.020052  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 19:20:51.020153  549077 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.185:8443
	I1205 19:20:51.020526  549077 node_ready.go:35] waiting up to 6m0s for node "ha-106302-m02" to be "Ready" ...
	I1205 19:20:51.020686  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:51.020701  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:51.020713  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:51.020723  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:51.041602  549077 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1205 19:20:51.521579  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:51.521608  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:51.521618  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:51.521624  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:51.528072  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:20:52.021672  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:52.021725  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:52.021737  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:52.021745  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:52.033142  549077 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1205 19:20:52.521343  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:52.521374  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:52.521385  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:52.521392  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:52.538251  549077 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1205 19:20:53.021297  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:53.021332  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:53.021341  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:53.021348  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:53.024986  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:53.025544  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:53.521241  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:53.521267  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:53.521276  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:53.521280  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:53.524346  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:54.021533  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:54.021555  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:54.021563  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:54.021566  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:54.024867  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:54.521530  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:54.521559  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:54.521573  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:54.521579  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:54.525086  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.020940  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:55.020967  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:55.020978  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:55.020982  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:55.024965  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.521541  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:55.521567  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:55.521578  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:55.521583  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:55.524843  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.525513  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:56.021561  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:56.021592  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:56.021605  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:56.021613  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:56.032511  549077 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1205 19:20:56.521545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:56.521569  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:56.521578  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:56.521582  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:56.525173  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:57.021393  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:57.021418  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:57.021428  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:57.021452  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:57.024653  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:57.521602  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:57.521630  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:57.521642  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:57.521648  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:57.524714  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:58.021076  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:58.021102  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:58.021111  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:58.021115  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:58.024741  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:58.025390  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:58.521263  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:58.521301  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:58.521311  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:58.521316  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:58.524604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:59.021545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:59.021570  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:59.021579  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:59.021585  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:59.025044  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:59.521104  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:59.521130  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:59.521139  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:59.521142  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:59.524601  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:00.021726  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:00.021752  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:00.021761  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:00.021765  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:00.025155  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:00.025976  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:00.521405  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:00.521429  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:00.521438  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:00.521443  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:00.524889  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:01.021527  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:01.021552  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:01.021564  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:01.021570  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:01.025273  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:01.521362  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:01.521386  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:01.521395  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:01.521400  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:01.525347  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.021591  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:02.021615  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:02.021624  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:02.021629  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:02.025220  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.521521  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:02.521548  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:02.521557  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:02.521562  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:02.524828  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.525818  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:03.021696  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:03.021722  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:03.021731  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:03.021735  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:03.025467  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:03.521081  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:03.521106  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:03.521115  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:03.521118  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:03.525582  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:04.021546  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:04.021570  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:04.021579  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:04.021583  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:04.025004  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:04.520903  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:04.520929  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:04.520937  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:04.520942  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:04.524427  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:05.021518  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:05.021545  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:05.021554  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:05.021557  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:05.025066  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:05.025792  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:05.520844  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:05.520870  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:05.520880  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:05.520885  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:05.524450  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:06.021705  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:06.021737  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:06.021750  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:06.021757  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:06.028871  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:21:06.520789  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:06.520815  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:06.520824  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:06.520829  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:06.524081  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:07.021065  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:07.021090  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:07.021099  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:07.021104  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:07.025141  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:07.521099  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:07.521129  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:07.521139  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:07.521142  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:07.524645  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:07.525369  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:08.021173  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:08.021197  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:08.021205  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:08.021211  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:08.024992  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:08.520960  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:08.520986  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:08.520994  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:08.521000  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:08.526502  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:21:09.021508  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:09.021532  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:09.021541  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:09.021545  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:09.024675  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:09.521594  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:09.521619  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:09.521628  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:09.521631  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:09.525284  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:09.525956  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:10.021222  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.021257  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.021266  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.021271  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.024522  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.025029  549077 node_ready.go:49] node "ha-106302-m02" has status "Ready":"True"
	I1205 19:21:10.025048  549077 node_ready.go:38] duration metric: took 19.004494335s for node "ha-106302-m02" to be "Ready" ...
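
The long run of GETs above is minikube's node_ready wait: the node object is re-fetched roughly every half second until its Ready condition reports True, which took about 19 s here. The same wait expressed with client-go, as a sketch (the kubeconfig path and node name come from this run but should be treated as placeholders):

// waitready.go: poll a node until its Ready condition is True (client-go sketch).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20052-530897/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-106302-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for node to become Ready")
}
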
	I1205 19:21:10.025058  549077 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:21:10.025143  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:10.025161  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.025168  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.025172  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.029254  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:10.037343  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.037449  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-45m77
	I1205 19:21:10.037458  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.037466  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.037471  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.041083  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.041839  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.041858  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.041871  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.041877  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.045415  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.045998  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.046023  549077 pod_ready.go:82] duration metric: took 8.64868ms for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.046036  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.046126  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sjsv2
	I1205 19:21:10.046137  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.046148  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.046157  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.048885  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.049682  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.049701  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.049711  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.049719  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.052106  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.052838  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.052859  549077 pod_ready.go:82] duration metric: took 6.814644ms for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.052870  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.052943  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302
	I1205 19:21:10.052958  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.052969  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.052977  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.055429  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.056066  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.056082  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.056091  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.056098  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.058521  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.059123  549077 pod_ready.go:93] pod "etcd-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.059143  549077 pod_ready.go:82] duration metric: took 6.26496ms for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.059152  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.059214  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m02
	I1205 19:21:10.059222  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.059229  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.059234  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.061697  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.062341  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.062358  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.062365  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.062369  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.064629  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.065300  549077 pod_ready.go:93] pod "etcd-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.065321  549077 pod_ready.go:82] duration metric: took 6.163254ms for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.065335  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.221800  549077 request.go:632] Waited for 156.353212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:21:10.221879  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:21:10.221887  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.221896  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.221902  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.225800  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.421906  549077 request.go:632] Waited for 195.38917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.421986  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.421994  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.422009  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.422020  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.425349  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.426055  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.426080  549077 pod_ready.go:82] duration metric: took 360.734464ms for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.426094  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.622166  549077 request.go:632] Waited for 195.985328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:21:10.622258  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:21:10.622264  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.622274  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.622278  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.626000  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.822214  549077 request.go:632] Waited for 195.406875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.822287  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.822292  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.822300  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.822313  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.825573  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.826254  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.826276  549077 pod_ready.go:82] duration metric: took 400.173601ms for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.826290  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.021260  549077 request.go:632] Waited for 194.873219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:21:11.021346  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:21:11.021355  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.021363  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.021370  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.024811  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:11.221934  549077 request.go:632] Waited for 196.368194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:11.222013  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:11.222048  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.222064  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.222069  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.226121  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:11.226777  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:11.226804  549077 pod_ready.go:82] duration metric: took 400.496709ms for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.226817  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.421793  549077 request.go:632] Waited for 194.889039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:21:11.421939  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:21:11.421953  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.421962  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.421966  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.425791  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:11.621786  549077 request.go:632] Waited for 195.325808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:11.621884  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:11.621897  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.621912  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.621921  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.626156  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:11.626616  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:11.626639  549077 pod_ready.go:82] duration metric: took 399.812324ms for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.626651  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.821729  549077 request.go:632] Waited for 194.997004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:21:11.821817  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:21:11.821822  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.821831  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.821838  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.825718  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.021841  549077 request.go:632] Waited for 195.410535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:12.021958  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:12.021969  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.021977  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.021984  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.025441  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.025999  549077 pod_ready.go:93] pod "kube-proxy-n57lf" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.026021  549077 pod_ready.go:82] duration metric: took 399.361827ms for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.026047  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.222118  549077 request.go:632] Waited for 195.969624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:21:12.222187  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:21:12.222192  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.222200  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.222204  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.225785  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.422070  549077 request.go:632] Waited for 195.377811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.422132  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.422137  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.422145  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.422149  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.426002  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.426709  549077 pod_ready.go:93] pod "kube-proxy-zw6nj" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.426735  549077 pod_ready.go:82] duration metric: took 400.678816ms for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.426748  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.621608  549077 request.go:632] Waited for 194.758143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:21:12.621678  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:21:12.621683  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.621691  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.621699  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.625056  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.822084  549077 request.go:632] Waited for 196.278548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.822154  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.822166  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.822175  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.822178  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.826187  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.827028  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.827048  549077 pod_ready.go:82] duration metric: took 400.290627ms for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.827061  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:13.021645  549077 request.go:632] Waited for 194.500049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:21:13.021737  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:21:13.021746  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.021787  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.021795  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.025431  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:13.221555  549077 request.go:632] Waited for 195.53176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:13.221632  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:13.221641  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.221652  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.221657  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.226002  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:13.226628  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:13.226651  549077 pod_ready.go:82] duration metric: took 399.582286ms for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:13.226663  549077 pod_ready.go:39] duration metric: took 3.201594435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
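	[editor's note] The pod_ready.go lines above poll each control-plane pod's Ready condition through the API server until it reports "True". A minimal sketch of that pattern, assuming client-go and a kubeconfig at the default path; waitPodReady and the 500ms poll interval are illustrative, not minikube's actual helper:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls one pod until its PodReady condition is True or the timeout expires.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reports Ready:"True", as in the log above
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // keep polling, roughly like pod_ready.go
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-ha-106302", 6*time.Minute); err != nil {
			panic(err)
		}
	}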
	I1205 19:21:13.226683  549077 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:21:13.226740  549077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:21:13.244668  549077 api_server.go:72] duration metric: took 22.573625009s to wait for apiserver process to appear ...
	I1205 19:21:13.244706  549077 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:21:13.244737  549077 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I1205 19:21:13.252149  549077 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I1205 19:21:13.252242  549077 round_trippers.go:463] GET https://192.168.39.185:8443/version
	I1205 19:21:13.252252  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.252260  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.252283  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.253152  549077 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 19:21:13.253251  549077 api_server.go:141] control plane version: v1.31.2
	I1205 19:21:13.253269  549077 api_server.go:131] duration metric: took 8.556554ms to wait for apiserver health ...
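	[editor's note] The healthz/version step above is a plain HTTPS probe of the API server. A minimal sketch, assuming the endpoint is reachable and skipping certificate verification purely for illustration (minikube itself authenticates with the cluster's client certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.185:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// a healthy apiserver answers "ok" with HTTP 200, as in the log above
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
	}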
	I1205 19:21:13.253277  549077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:21:13.421707  549077 request.go:632] Waited for 168.323563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.421778  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.421784  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.421803  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.421808  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.428060  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:21:13.433027  549077 system_pods.go:59] 17 kube-system pods found
	I1205 19:21:13.433063  549077 system_pods.go:61] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:21:13.433069  549077 system_pods.go:61] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:21:13.433073  549077 system_pods.go:61] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:21:13.433076  549077 system_pods.go:61] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:21:13.433079  549077 system_pods.go:61] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:21:13.433083  549077 system_pods.go:61] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:21:13.433087  549077 system_pods.go:61] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:21:13.433090  549077 system_pods.go:61] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:21:13.433094  549077 system_pods.go:61] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:21:13.433097  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:21:13.433101  549077 system_pods.go:61] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:21:13.433104  549077 system_pods.go:61] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:21:13.433107  549077 system_pods.go:61] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:21:13.433110  549077 system_pods.go:61] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:21:13.433114  549077 system_pods.go:61] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:21:13.433119  549077 system_pods.go:61] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:21:13.433125  549077 system_pods.go:61] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:21:13.433131  549077 system_pods.go:74] duration metric: took 179.848181ms to wait for pod list to return data ...
	I1205 19:21:13.433140  549077 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:21:13.621481  549077 request.go:632] Waited for 188.228658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:21:13.621548  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:21:13.621554  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.621562  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.621566  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.625432  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:13.625697  549077 default_sa.go:45] found service account: "default"
	I1205 19:21:13.625716  549077 default_sa.go:55] duration metric: took 192.568863ms for default service account to be created ...
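	[editor's note] The recurring "Waited ... due to client-side throttling" lines come from client-go's client-side rate limiter, which falls back to 5 QPS with a burst of 10 when rest.Config leaves QPS/Burst at zero. A minimal sketch of raising those limits, assuming a kubeconfig at the default path; the 50/100 values are illustrative, not what minikube configures:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// With QPS/Burst left at zero, client-go uses its defaults (5 QPS, burst 10),
		// which is what produces the "client-side throttling" waits in the log above.
		cfg.QPS = 50
		cfg.Burst = 100
		_ = kubernetes.NewForConfigOrDie(cfg)
	}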
	I1205 19:21:13.625725  549077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:21:13.821886  549077 request.go:632] Waited for 196.082261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.821977  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.821988  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.821997  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.822001  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.828461  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:21:13.834834  549077 system_pods.go:86] 17 kube-system pods found
	I1205 19:21:13.834869  549077 system_pods.go:89] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:21:13.834877  549077 system_pods.go:89] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:21:13.834882  549077 system_pods.go:89] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:21:13.834886  549077 system_pods.go:89] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:21:13.834890  549077 system_pods.go:89] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:21:13.834894  549077 system_pods.go:89] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:21:13.834898  549077 system_pods.go:89] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:21:13.834901  549077 system_pods.go:89] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:21:13.834905  549077 system_pods.go:89] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:21:13.834909  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:21:13.834912  549077 system_pods.go:89] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:21:13.834915  549077 system_pods.go:89] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:21:13.834919  549077 system_pods.go:89] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:21:13.834924  549077 system_pods.go:89] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:21:13.834928  549077 system_pods.go:89] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:21:13.834935  549077 system_pods.go:89] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:21:13.834939  549077 system_pods.go:89] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:21:13.834946  549077 system_pods.go:126] duration metric: took 209.215629ms to wait for k8s-apps to be running ...
	I1205 19:21:13.834957  549077 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:21:13.835009  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:21:13.850235  549077 system_svc.go:56] duration metric: took 15.264777ms WaitForService to wait for kubelet
	I1205 19:21:13.850283  549077 kubeadm.go:582] duration metric: took 23.179247512s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
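	[editor's note] The system_svc check above runs systemctl through the SSH runner and treats a zero exit code as "kubelet is running". A local stand-in sketch (the SSH transport is omitted, and the command is simplified from the one logged above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// systemctl is-active exits 0 only when the unit is active
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}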
	I1205 19:21:13.850305  549077 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:21:14.021757  549077 request.go:632] Waited for 171.347316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes
	I1205 19:21:14.021833  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes
	I1205 19:21:14.021840  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:14.021850  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:14.021860  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:14.026541  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:14.027820  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:21:14.027846  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:21:14.027863  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:21:14.027868  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:21:14.027874  549077 node_conditions.go:105] duration metric: took 177.564002ms to run NodePressure ...
	I1205 19:21:14.027887  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:21:14.027919  549077 start.go:255] writing updated cluster config ...
	I1205 19:21:14.029921  549077 out.go:201] 
	I1205 19:21:14.031474  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:14.031571  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:14.033173  549077 out.go:177] * Starting "ha-106302-m03" control-plane node in "ha-106302" cluster
	I1205 19:21:14.034362  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:21:14.034386  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:21:14.034498  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:21:14.034514  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:21:14.034605  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:14.034796  549077 start.go:360] acquireMachinesLock for ha-106302-m03: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:21:14.034842  549077 start.go:364] duration metric: took 26.337µs to acquireMachinesLock for "ha-106302-m03"
	I1205 19:21:14.034860  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:21:14.034960  549077 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1205 19:21:14.036589  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:21:14.036698  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:14.036753  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:14.052449  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1205 19:21:14.052905  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:14.053431  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:14.053458  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:14.053758  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:14.053945  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:14.054107  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:14.054258  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:21:14.054297  549077 client.go:168] LocalClient.Create starting
	I1205 19:21:14.054348  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:21:14.054391  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:21:14.054413  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:21:14.054484  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:21:14.054515  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:21:14.054536  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:21:14.054563  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:21:14.054575  549077 main.go:141] libmachine: (ha-106302-m03) Calling .PreCreateCheck
	I1205 19:21:14.054725  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:14.055103  549077 main.go:141] libmachine: Creating machine...
	I1205 19:21:14.055117  549077 main.go:141] libmachine: (ha-106302-m03) Calling .Create
	I1205 19:21:14.055267  549077 main.go:141] libmachine: (ha-106302-m03) Creating KVM machine...
	I1205 19:21:14.056572  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found existing default KVM network
	I1205 19:21:14.056653  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found existing private KVM network mk-ha-106302
	I1205 19:21:14.056780  549077 main.go:141] libmachine: (ha-106302-m03) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 ...
	I1205 19:21:14.056804  549077 main.go:141] libmachine: (ha-106302-m03) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:21:14.056850  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.056773  549869 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:21:14.056935  549077 main.go:141] libmachine: (ha-106302-m03) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:21:14.349600  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.349456  549869 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa...
	I1205 19:21:14.429525  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.429393  549869 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/ha-106302-m03.rawdisk...
	I1205 19:21:14.429558  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Writing magic tar header
	I1205 19:21:14.429573  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Writing SSH key tar header
	I1205 19:21:14.429586  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.429511  549869 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 ...
	I1205 19:21:14.429599  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03
	I1205 19:21:14.429612  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 (perms=drwx------)
	I1205 19:21:14.429633  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:21:14.429648  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:21:14.429664  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:21:14.429734  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:21:14.429769  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:21:14.429779  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:21:14.429798  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:21:14.429808  549077 main.go:141] libmachine: (ha-106302-m03) Creating domain...
	I1205 19:21:14.429823  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:21:14.429833  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:21:14.429861  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:21:14.429878  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home
	I1205 19:21:14.429910  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Skipping /home - not owner
	I1205 19:21:14.430728  549077 main.go:141] libmachine: (ha-106302-m03) define libvirt domain using xml: 
	I1205 19:21:14.430737  549077 main.go:141] libmachine: (ha-106302-m03) <domain type='kvm'>
	I1205 19:21:14.430743  549077 main.go:141] libmachine: (ha-106302-m03)   <name>ha-106302-m03</name>
	I1205 19:21:14.430748  549077 main.go:141] libmachine: (ha-106302-m03)   <memory unit='MiB'>2200</memory>
	I1205 19:21:14.430753  549077 main.go:141] libmachine: (ha-106302-m03)   <vcpu>2</vcpu>
	I1205 19:21:14.430758  549077 main.go:141] libmachine: (ha-106302-m03)   <features>
	I1205 19:21:14.430762  549077 main.go:141] libmachine: (ha-106302-m03)     <acpi/>
	I1205 19:21:14.430769  549077 main.go:141] libmachine: (ha-106302-m03)     <apic/>
	I1205 19:21:14.430774  549077 main.go:141] libmachine: (ha-106302-m03)     <pae/>
	I1205 19:21:14.430778  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.430783  549077 main.go:141] libmachine: (ha-106302-m03)   </features>
	I1205 19:21:14.430790  549077 main.go:141] libmachine: (ha-106302-m03)   <cpu mode='host-passthrough'>
	I1205 19:21:14.430795  549077 main.go:141] libmachine: (ha-106302-m03)   
	I1205 19:21:14.430801  549077 main.go:141] libmachine: (ha-106302-m03)   </cpu>
	I1205 19:21:14.430806  549077 main.go:141] libmachine: (ha-106302-m03)   <os>
	I1205 19:21:14.430811  549077 main.go:141] libmachine: (ha-106302-m03)     <type>hvm</type>
	I1205 19:21:14.430816  549077 main.go:141] libmachine: (ha-106302-m03)     <boot dev='cdrom'/>
	I1205 19:21:14.430823  549077 main.go:141] libmachine: (ha-106302-m03)     <boot dev='hd'/>
	I1205 19:21:14.430849  549077 main.go:141] libmachine: (ha-106302-m03)     <bootmenu enable='no'/>
	I1205 19:21:14.430873  549077 main.go:141] libmachine: (ha-106302-m03)   </os>
	I1205 19:21:14.430884  549077 main.go:141] libmachine: (ha-106302-m03)   <devices>
	I1205 19:21:14.430900  549077 main.go:141] libmachine: (ha-106302-m03)     <disk type='file' device='cdrom'>
	I1205 19:21:14.430917  549077 main.go:141] libmachine: (ha-106302-m03)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/boot2docker.iso'/>
	I1205 19:21:14.430928  549077 main.go:141] libmachine: (ha-106302-m03)       <target dev='hdc' bus='scsi'/>
	I1205 19:21:14.430936  549077 main.go:141] libmachine: (ha-106302-m03)       <readonly/>
	I1205 19:21:14.430944  549077 main.go:141] libmachine: (ha-106302-m03)     </disk>
	I1205 19:21:14.430951  549077 main.go:141] libmachine: (ha-106302-m03)     <disk type='file' device='disk'>
	I1205 19:21:14.430963  549077 main.go:141] libmachine: (ha-106302-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:21:14.431003  549077 main.go:141] libmachine: (ha-106302-m03)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/ha-106302-m03.rawdisk'/>
	I1205 19:21:14.431029  549077 main.go:141] libmachine: (ha-106302-m03)       <target dev='hda' bus='virtio'/>
	I1205 19:21:14.431041  549077 main.go:141] libmachine: (ha-106302-m03)     </disk>
	I1205 19:21:14.431052  549077 main.go:141] libmachine: (ha-106302-m03)     <interface type='network'>
	I1205 19:21:14.431065  549077 main.go:141] libmachine: (ha-106302-m03)       <source network='mk-ha-106302'/>
	I1205 19:21:14.431075  549077 main.go:141] libmachine: (ha-106302-m03)       <model type='virtio'/>
	I1205 19:21:14.431084  549077 main.go:141] libmachine: (ha-106302-m03)     </interface>
	I1205 19:21:14.431096  549077 main.go:141] libmachine: (ha-106302-m03)     <interface type='network'>
	I1205 19:21:14.431107  549077 main.go:141] libmachine: (ha-106302-m03)       <source network='default'/>
	I1205 19:21:14.431122  549077 main.go:141] libmachine: (ha-106302-m03)       <model type='virtio'/>
	I1205 19:21:14.431134  549077 main.go:141] libmachine: (ha-106302-m03)     </interface>
	I1205 19:21:14.431143  549077 main.go:141] libmachine: (ha-106302-m03)     <serial type='pty'>
	I1205 19:21:14.431151  549077 main.go:141] libmachine: (ha-106302-m03)       <target port='0'/>
	I1205 19:21:14.431161  549077 main.go:141] libmachine: (ha-106302-m03)     </serial>
	I1205 19:21:14.431168  549077 main.go:141] libmachine: (ha-106302-m03)     <console type='pty'>
	I1205 19:21:14.431178  549077 main.go:141] libmachine: (ha-106302-m03)       <target type='serial' port='0'/>
	I1205 19:21:14.431186  549077 main.go:141] libmachine: (ha-106302-m03)     </console>
	I1205 19:21:14.431201  549077 main.go:141] libmachine: (ha-106302-m03)     <rng model='virtio'>
	I1205 19:21:14.431213  549077 main.go:141] libmachine: (ha-106302-m03)       <backend model='random'>/dev/random</backend>
	I1205 19:21:14.431223  549077 main.go:141] libmachine: (ha-106302-m03)     </rng>
	I1205 19:21:14.431230  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.431248  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.431260  549077 main.go:141] libmachine: (ha-106302-m03)   </devices>
	I1205 19:21:14.431266  549077 main.go:141] libmachine: (ha-106302-m03) </domain>
	I1205 19:21:14.431276  549077 main.go:141] libmachine: (ha-106302-m03) 
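	[editor's note] The block above is the libvirt domain XML the kvm2 driver assembles for the new node. A minimal sketch of the define-and-start step, assuming the github.com/libvirt/libvirt-go bindings and a qemu:///system connection; the domain.xml file name is a placeholder for the XML logged above:

	package main

	import (
		"os"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		// domain.xml stands for the XML minikube printed above
		xmlBytes, err := os.ReadFile("domain.xml")
		if err != nil {
			panic(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		// Define the persistent domain, then start it, mirroring the
		// "define libvirt domain using xml" / "Creating domain..." steps above.
		dom, err := conn.DomainDefineXML(string(xmlBytes))
		if err != nil {
			panic(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			panic(err)
		}
	}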
	I1205 19:21:14.438494  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:19:ce:fd in network default
	I1205 19:21:14.439230  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:14.439249  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring networks are active...
	I1205 19:21:14.440093  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring network default is active
	I1205 19:21:14.440381  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring network mk-ha-106302 is active
	I1205 19:21:14.440705  549077 main.go:141] libmachine: (ha-106302-m03) Getting domain xml...
	I1205 19:21:14.441404  549077 main.go:141] libmachine: (ha-106302-m03) Creating domain...
	I1205 19:21:15.693271  549077 main.go:141] libmachine: (ha-106302-m03) Waiting to get IP...
	I1205 19:21:15.694143  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:15.694577  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:15.694598  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:15.694548  549869 retry.go:31] will retry after 242.776885ms: waiting for machine to come up
	I1205 19:21:15.939062  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:15.939524  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:15.939551  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:15.939479  549869 retry.go:31] will retry after 378.968491ms: waiting for machine to come up
	I1205 19:21:16.320454  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:16.320979  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:16.321027  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:16.320939  549869 retry.go:31] will retry after 344.418245ms: waiting for machine to come up
	I1205 19:21:16.667478  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:16.667854  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:16.667886  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:16.667793  549869 retry.go:31] will retry after 423.913988ms: waiting for machine to come up
	I1205 19:21:17.093467  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:17.093883  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:17.093914  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:17.093826  549869 retry.go:31] will retry after 515.714654ms: waiting for machine to come up
	I1205 19:21:17.611140  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:17.611460  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:17.611485  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:17.611417  549869 retry.go:31] will retry after 696.033751ms: waiting for machine to come up
	I1205 19:21:18.308904  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:18.309411  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:18.309441  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:18.309369  549869 retry.go:31] will retry after 785.032938ms: waiting for machine to come up
	I1205 19:21:19.095780  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:19.096341  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:19.096368  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:19.096298  549869 retry.go:31] will retry after 896.435978ms: waiting for machine to come up
	I1205 19:21:19.994107  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:19.994555  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:19.994578  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:19.994515  549869 retry.go:31] will retry after 1.855664433s: waiting for machine to come up
	I1205 19:21:21.852199  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:21.852746  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:21.852782  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:21.852681  549869 retry.go:31] will retry after 1.846119751s: waiting for machine to come up
	I1205 19:21:23.701581  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:23.702157  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:23.702188  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:23.702108  549869 retry.go:31] will retry after 2.613135019s: waiting for machine to come up
	I1205 19:21:26.317749  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:26.318296  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:26.318317  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:26.318258  549869 retry.go:31] will retry after 3.299144229s: waiting for machine to come up
	I1205 19:21:29.618947  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:29.619445  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:29.619480  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:29.619393  549869 retry.go:31] will retry after 3.447245355s: waiting for machine to come up
	I1205 19:21:33.071166  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:33.071564  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:33.071595  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:33.071509  549869 retry.go:31] will retry after 3.459206484s: waiting for machine to come up
	I1205 19:21:36.533492  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.533999  549077 main.go:141] libmachine: (ha-106302-m03) Found IP for machine: 192.168.39.151
	I1205 19:21:36.534029  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has current primary IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
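	[editor's note] The "will retry after ..." lines above are a jittered, growing backoff around a DHCP-lease lookup that ends once the new domain reports 192.168.39.151. A minimal sketch of that loop; lookupIP is a hypothetical stand-in for querying the host's DHCP leases, and the delays are illustrative:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP with a growing, jittered delay until an IP appears
	// or the overall timeout is hit, much like the retry.go lines above.
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			// add some jitter, echoing the 242ms, 378ms, ... 3.4s waits in the log
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", errors.New("timed out waiting for machine to get an IP")
	}

	func main() {
		ip, err := waitForIP(func() (string, error) { return "192.168.39.151", nil }, time.Minute)
		fmt.Println(ip, err)
	}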
	I1205 19:21:36.534063  549077 main.go:141] libmachine: (ha-106302-m03) Reserving static IP address...
	I1205 19:21:36.534590  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find host DHCP lease matching {name: "ha-106302-m03", mac: "52:54:00:e6:65:e2", ip: "192.168.39.151"} in network mk-ha-106302
	I1205 19:21:36.616736  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Getting to WaitForSSH function...
	I1205 19:21:36.616827  549077 main.go:141] libmachine: (ha-106302-m03) Reserved static IP address: 192.168.39.151
	I1205 19:21:36.616852  549077 main.go:141] libmachine: (ha-106302-m03) Waiting for SSH to be available...
	I1205 19:21:36.619362  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.620041  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.620071  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.620207  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using SSH client type: external
	I1205 19:21:36.620243  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa (-rw-------)
	I1205 19:21:36.620289  549077 main.go:141] libmachine: (ha-106302-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:21:36.620307  549077 main.go:141] libmachine: (ha-106302-m03) DBG | About to run SSH command:
	I1205 19:21:36.620323  549077 main.go:141] libmachine: (ha-106302-m03) DBG | exit 0
	I1205 19:21:36.748331  549077 main.go:141] libmachine: (ha-106302-m03) DBG | SSH cmd err, output: <nil>: 
	I1205 19:21:36.748638  549077 main.go:141] libmachine: (ha-106302-m03) KVM machine creation complete!
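	[editor's note] The WaitForSSH step above repeatedly runs an external ssh with "exit 0" until the command succeeds. A minimal sketch of that probe; the key path and address are taken from the log, while the 30-attempt / 2-second loop is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		args := []string{
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", "/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa",
			"docker@192.168.39.151",
			"exit", "0", // remote command: succeed as soon as sshd accepts the session
		}
		for i := 0; i < 30; i++ {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}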
	I1205 19:21:36.748951  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:36.749696  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:36.749899  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:36.750158  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:21:36.750177  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetState
	I1205 19:21:36.751459  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:21:36.751496  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:21:36.751505  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:21:36.751516  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.753721  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.754147  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.754180  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.754321  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.754488  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.754635  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.754782  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.754931  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.755238  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.755253  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:21:36.859924  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:21:36.859961  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:21:36.859974  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.864316  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.864691  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.864716  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.864886  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.865081  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.865227  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.865363  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.865505  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.865742  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.865757  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:21:36.969493  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:21:36.969588  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:21:36.969602  549077 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:21:36.969613  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:36.969955  549077 buildroot.go:166] provisioning hostname "ha-106302-m03"
	I1205 19:21:36.969984  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:36.970178  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.972856  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.973248  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.973275  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.973447  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.973641  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.973807  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.973971  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.974182  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.974409  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.974424  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302-m03 && echo "ha-106302-m03" | sudo tee /etc/hostname
	I1205 19:21:37.091631  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302-m03
	
	I1205 19:21:37.091670  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.095049  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.095508  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.095538  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.095711  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.095892  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.096106  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.096340  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.096575  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.096743  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.096759  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:21:37.210648  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
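Note on the step above: after setting the hostname, the provisioner patches /etc/hosts over SSH so the machine can resolve its own name. A minimal Go sketch of assembling that same shell snippet (hypothetical helper for illustration, not minikube's actual code path):

    package main

    import "fmt"

    // hostsPatch builds the shell snippet visible in the log above: rewrite the
    // 127.0.1.1 entry if one exists, otherwise append one for the hostname.
    func hostsPatch(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname)
    }

    func main() {
        fmt.Println(hostsPatch("ha-106302-m03"))
    }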
	I1205 19:21:37.210686  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:21:37.210703  549077 buildroot.go:174] setting up certificates
	I1205 19:21:37.210719  549077 provision.go:84] configureAuth start
	I1205 19:21:37.210728  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:37.211084  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:37.214307  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.214777  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.214811  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.214993  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.217609  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.218026  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.218059  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.218357  549077 provision.go:143] copyHostCerts
	I1205 19:21:37.218397  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:21:37.218443  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:21:37.218457  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:21:37.218538  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:21:37.218640  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:21:37.218667  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:21:37.218672  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:21:37.218707  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:21:37.218773  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:21:37.218800  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:21:37.218810  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:21:37.218844  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:21:37.218931  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302-m03 san=[127.0.0.1 192.168.39.151 ha-106302-m03 localhost minikube]
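The server certificate generated above carries both IP and DNS SANs so the node can be reached by address, hostname, or the localhost/minikube aliases. A rough, self-signed sketch using Go's crypto/x509 (minikube signs with its own CA key instead; the org and SAN values below are simply taken from the log line):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-106302-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as seen in the log: IPs plus hostname aliases.
            DNSNames:    []string{"ha-106302-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.151")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }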
	I1205 19:21:37.343754  549077 provision.go:177] copyRemoteCerts
	I1205 19:21:37.343819  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:21:37.343847  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.346846  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.347219  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.347248  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.347438  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.347639  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.347948  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.348134  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:37.432798  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:21:37.432880  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:21:37.459881  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:21:37.459950  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:21:37.486599  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:21:37.486685  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:21:37.511864  549077 provision.go:87] duration metric: took 301.129005ms to configureAuth
	I1205 19:21:37.511899  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:21:37.512151  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:37.512247  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.515413  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.515827  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.515873  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.516082  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.516362  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.516553  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.516696  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.516848  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.517021  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.517041  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:21:37.766182  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:21:37.766214  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:21:37.766223  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetURL
	I1205 19:21:37.767491  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using libvirt version 6000000
	I1205 19:21:37.770234  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.770645  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.770683  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.770820  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:21:37.770836  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:21:37.770844  549077 client.go:171] duration metric: took 23.716534789s to LocalClient.Create
	I1205 19:21:37.770869  549077 start.go:167] duration metric: took 23.716613038s to libmachine.API.Create "ha-106302"
	I1205 19:21:37.770879  549077 start.go:293] postStartSetup for "ha-106302-m03" (driver="kvm2")
	I1205 19:21:37.770890  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:21:37.770909  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:37.771260  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:21:37.771293  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.773751  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.774322  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.774351  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.774623  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.774898  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.775132  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.775318  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:37.864963  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:21:37.869224  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:21:37.869250  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:21:37.869346  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:21:37.869450  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:21:37.869464  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:21:37.869572  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:21:37.878920  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:21:37.904695  549077 start.go:296] duration metric: took 133.797994ms for postStartSetup
	I1205 19:21:37.904759  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:37.905447  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:37.908301  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.908672  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.908702  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.908956  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:37.909156  549077 start.go:128] duration metric: took 23.874183503s to createHost
	I1205 19:21:37.909187  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.911450  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.911786  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.911820  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.911891  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.912073  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.912217  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.912383  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.912551  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.912721  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.912731  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:21:38.013720  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426497.965708253
	
	I1205 19:21:38.013754  549077 fix.go:216] guest clock: 1733426497.965708253
	I1205 19:21:38.013766  549077 fix.go:229] Guest: 2024-12-05 19:21:37.965708253 +0000 UTC Remote: 2024-12-05 19:21:37.909171964 +0000 UTC m=+152.282908362 (delta=56.536289ms)
	I1205 19:21:38.013790  549077 fix.go:200] guest clock delta is within tolerance: 56.536289ms
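The guest-clock check above parses the output of `date +%s.%N` on the new VM and compares it against the host clock. A tiny sketch of that comparison, using the exact timestamps from the log; the 3-second tolerance is an assumption for illustration, not necessarily minikube's constant:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest clock, from "date +%s.%N" on the VM.
        guest := time.Unix(1733426497, 965708253)
        // Host-side timestamp taken just before the SSH call.
        remote := time.Date(2024, 12, 5, 19, 21, 37, 909171964, time.UTC)

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 3 * time.Second // assumed tolerance
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }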
	I1205 19:21:38.013799  549077 start.go:83] releasing machines lock for "ha-106302-m03", held for 23.978946471s
	I1205 19:21:38.013827  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.014134  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:38.016789  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.017218  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.017243  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.019529  549077 out.go:177] * Found network options:
	I1205 19:21:38.020846  549077 out.go:177]   - NO_PROXY=192.168.39.185,192.168.39.22
	W1205 19:21:38.022010  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:21:38.022031  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:21:38.022044  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022565  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022780  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022889  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:21:38.022930  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	W1205 19:21:38.022997  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:21:38.023035  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:21:38.023141  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:21:38.023159  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:38.025672  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.025960  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026079  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.026109  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026225  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:38.026344  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.026368  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026432  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:38.026548  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:38.026555  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:38.026676  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:38.026727  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:38.026820  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:38.026963  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:38.262374  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:21:38.269119  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:21:38.269192  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:21:38.288736  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:21:38.288773  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:21:38.288918  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:21:38.308145  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:21:38.324419  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:21:38.324486  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:21:38.340495  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:21:38.356196  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:21:38.499051  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:21:38.664170  549077 docker.go:233] disabling docker service ...
	I1205 19:21:38.664261  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:21:38.679720  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:21:38.693887  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:21:38.835246  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:21:38.967777  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
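Before the node is switched fully to CRI-O, the cri-dockerd and Docker units are stopped, disabled, and masked so they cannot come back and claim the runtime socket. A sketch of that sequence (illustrative only; needs systemd and root):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the systemctl calls in the log above.
        cmds := [][]string{
            {"systemctl", "stop", "-f", "cri-docker.socket"},
            {"systemctl", "stop", "-f", "cri-docker.service"},
            {"systemctl", "disable", "cri-docker.socket"},
            {"systemctl", "mask", "cri-docker.service"},
            {"systemctl", "stop", "-f", "docker.socket"},
            {"systemctl", "stop", "-f", "docker.service"},
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        }
        for _, c := range cmds {
            if err := exec.Command("sudo", c...).Run(); err != nil {
                fmt.Println(c, "failed:", err)
            }
        }
    }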
	I1205 19:21:38.984739  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:21:39.005139  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:21:39.005219  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.018668  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:21:39.018748  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.030582  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.042783  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.055956  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:21:39.068121  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.079421  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.099262  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
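The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 image and the cgroupfs cgroup manager. The same rewrite, performed on an in-memory string for illustration:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Stand-in for the contents of /etc/crio/crio.conf.d/02-crio.conf.
        conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
            "[crio.runtime]\ncgroup_manager = \"systemd\"\n"

        // Same substitutions as the sed commands in the log.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        fmt.Print(conf)
    }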
	I1205 19:21:39.112188  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:21:39.123835  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:21:39.123897  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:21:39.142980  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
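The sysctl probe above fails because the br_netfilter module is not loaded yet, so the provisioner falls back to modprobe and then enables IPv4 forwarding. A sketch of that probe-then-fallback logic (illustrative; run as root on a Linux host):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // If the bridge-netfilter sysctl cannot be read, the module is likely
        // missing, so load it before retrying networking setup.
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("sysctl failed, loading br_netfilter:", err)
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                fmt.Println("modprobe failed:", err)
            }
        }
        // Enable IPv4 forwarding, as in the log.
        if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
            fmt.Println("ip_forward failed:", err)
        }
    }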
	I1205 19:21:39.158784  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:21:39.282396  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:21:39.381886  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:21:39.381979  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:21:39.387103  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:21:39.387165  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:21:39.391338  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:21:39.433516  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:21:39.433618  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:21:39.463442  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:21:39.493740  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:21:39.495019  549077 out.go:177]   - env NO_PROXY=192.168.39.185
	I1205 19:21:39.496240  549077 out.go:177]   - env NO_PROXY=192.168.39.185,192.168.39.22
	I1205 19:21:39.497508  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:39.500359  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:39.500726  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:39.500755  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:39.500911  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:21:39.505557  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:21:39.519317  549077 mustload.go:65] Loading cluster: ha-106302
	I1205 19:21:39.519614  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:39.519880  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:39.519923  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:39.535653  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I1205 19:21:39.536186  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:39.536801  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:39.536826  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:39.537227  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:39.537444  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:21:39.538986  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:21:39.539332  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:39.539371  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:39.555429  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I1205 19:21:39.555999  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:39.556560  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:39.556589  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:39.556932  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:39.557156  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:21:39.557335  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.151
	I1205 19:21:39.557356  549077 certs.go:194] generating shared ca certs ...
	I1205 19:21:39.557390  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.557557  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:21:39.557617  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:21:39.557630  549077 certs.go:256] generating profile certs ...
	I1205 19:21:39.557734  549077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:21:39.557771  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85
	I1205 19:21:39.557795  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.151 192.168.39.254]
	I1205 19:21:39.646088  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 ...
	I1205 19:21:39.646122  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85: {Name:mkca6986931a87aa8d4bcffb8b1ac6412a83db65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.646289  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85 ...
	I1205 19:21:39.646301  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85: {Name:mke7f657c575646b15413aa5e5525c127a73d588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.646374  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:21:39.646516  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:21:39.646682  549077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:21:39.646703  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:21:39.646737  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:21:39.646758  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:21:39.646775  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:21:39.646792  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:21:39.646808  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:21:39.646827  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:21:39.660323  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:21:39.660454  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:21:39.660507  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:21:39.660523  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:21:39.660561  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:21:39.660595  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:21:39.660628  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:21:39.660684  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:21:39.660725  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:21:39.660748  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:21:39.660768  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:39.660816  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:21:39.664340  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:39.664849  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:21:39.664879  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:39.665165  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:21:39.665411  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:21:39.665607  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:21:39.665765  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:21:39.748651  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 19:21:39.754014  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 19:21:39.766062  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 19:21:39.771674  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 19:21:39.784618  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 19:21:39.789041  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 19:21:39.802785  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 19:21:39.808595  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1205 19:21:39.822597  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 19:21:39.827169  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 19:21:39.839924  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 19:21:39.844630  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1205 19:21:39.865166  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:21:39.890669  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:21:39.914805  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:21:39.938866  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:21:39.964041  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1205 19:21:39.989973  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:21:40.017414  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:21:40.042496  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:21:40.067448  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:21:40.092444  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:21:40.118324  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:21:40.144679  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 19:21:40.162124  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 19:21:40.178895  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 19:21:40.196614  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1205 19:21:40.216743  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 19:21:40.236796  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1205 19:21:40.255368  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 19:21:40.272767  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:21:40.279013  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:21:40.291865  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.297901  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.297969  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.305022  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:21:40.317671  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:21:40.330059  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.335215  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.335291  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.341648  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:21:40.353809  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:21:40.366241  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.371103  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.371178  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.377410  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
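The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the corresponding certificates, which is how the system trust store looks them up. A sketch that derives the link name the same way (assumes the openssl binary is present and /etc/ssl/certs is writable):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // "openssl x509 -hash -noout -in <cert>" prints the subject hash,
        // e.g. b5213941 for minikubeCA in this run.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            fmt.Println("symlink:", err)
        }
    }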
	I1205 19:21:40.389484  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:21:40.394089  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:21:40.394159  549077 kubeadm.go:934] updating node {m03 192.168.39.151 8443 v1.31.2 crio true true} ...
	I1205 19:21:40.394281  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
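The kubelet drop-in above is rendered per node: only the hostname override and node IP differ between the control-plane machines. A hypothetical text/template sketch of that rendering (not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // Per-node kubelet ExecStart, parameterized on version, node name and IP.
    var unit = template.Must(template.New("kubelet").Parse(`[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
    `))

    func main() {
        err := unit.Execute(os.Stdout, map[string]string{
            "Version": "v1.31.2",
            "Node":    "ha-106302-m03",
            "IP":      "192.168.39.151",
        })
        if err != nil {
            panic(err)
        }
    }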
	I1205 19:21:40.394312  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:21:40.394383  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:21:40.412017  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:21:40.412099  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
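The kube-vip config above is a static pod manifest: dropping it into the kubelet's staticPodPath (/etc/kubernetes/manifests, as the later scp to kube-vip.yaml shows) is what starts kube-vip on this control-plane node. A minimal sketch of that write, with a placeholder body standing in for the full manifest:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Placeholder for the generated manifest shown in the log above.
        manifest := []byte("apiVersion: v1\nkind: Pod\n# ... kube-vip spec as above ...\n")
        dst := filepath.Join("/etc/kubernetes/manifests", "kube-vip.yaml")
        if err := os.WriteFile(dst, manifest, 0644); err != nil {
            fmt.Println("write:", err)
        }
    }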
	I1205 19:21:40.412152  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:21:40.422903  549077 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 19:21:40.422982  549077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 19:21:40.433537  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1205 19:21:40.433551  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1205 19:21:40.433572  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:21:40.433606  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:21:40.433603  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 19:21:40.433634  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:21:40.433638  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:21:40.433701  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:21:40.452070  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 19:21:40.452102  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:21:40.452118  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 19:21:40.452167  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 19:21:40.452196  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:21:40.452198  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 19:21:40.481457  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 19:21:40.481500  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
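The stat/scp pairs above implement a simple check-then-transfer pattern: each of kubeadm, kubectl, and kubelet is copied from the local cache only when it is missing on the node. A local sketch of the check, with os.Stat standing in for the remote `stat -c "%s %y"` over SSH:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
            path := "/var/lib/minikube/binaries/v1.31.2/" + bin
            if _, err := os.Stat(path); os.IsNotExist(err) {
                fmt.Println("missing, would copy from local cache:", path)
            } else {
                fmt.Println("already present:", path)
            }
        }
    }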
	I1205 19:21:41.411979  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 19:21:41.422976  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 19:21:41.442199  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:21:41.460832  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:21:41.479070  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:21:41.483375  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
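	The bash one-liner above rewrites /etc/hosts idempotently: any stale control-plane.minikube.internal entry is dropped and the HA virtual IP 192.168.39.254 is appended. A rough Go sketch of the same rewrite follows; it writes to a temporary path so the sketch does not touch the real /etc/hosts.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinControlPlane drops any line ending in "<tab><name>" and appends
// "<vip><tab><name>", matching the grep/echo pipeline in the log above.
func pinControlPlane(hostsPath, vip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, vip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// /tmp path used here instead of /etc/hosts.
	err := pinControlPlane("/tmp/hosts.sketch", "192.168.39.254", "control-plane.minikube.internal")
	fmt.Println("updated:", err == nil)
}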
	I1205 19:21:41.497066  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:21:41.622952  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:21:41.643215  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:21:41.643585  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:41.643643  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:41.660142  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39403
	I1205 19:21:41.660811  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:41.661472  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:41.661507  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:41.661908  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:41.662156  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:21:41.663022  549077 start.go:317] joinCluster: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:21:41.663207  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 19:21:41.663239  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:21:41.666973  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:41.667413  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:21:41.667445  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:41.667629  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:21:41.667805  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:21:41.667958  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:21:41.668092  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:21:41.845827  549077 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:21:41.845894  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bitrl5.l9o7pcy69k2x0m8f --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m03 --control-plane --apiserver-advertise-address=192.168.39.151 --apiserver-bind-port=8443"
	I1205 19:22:05.091694  549077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bitrl5.l9o7pcy69k2x0m8f --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m03 --control-plane --apiserver-advertise-address=192.168.39.151 --apiserver-bind-port=8443": (23.245742289s)
	I1205 19:22:05.091745  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 19:22:05.651069  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302-m03 minikube.k8s.io/updated_at=2024_12_05T19_22_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=false
	I1205 19:22:05.805746  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-106302-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 19:22:05.942387  549077 start.go:319] duration metric: took 24.279360239s to joinCluster
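	After the join, the new node is labeled with minikube metadata and the control-plane NoSchedule taint is removed (that is what the trailing "-" on the taint command above does), so regular workloads can be scheduled onto it. A hedged client-go sketch of that taint removal, assuming a kubeconfig at the default location and using the node name from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-106302-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Keep every taint except the control-plane NoSchedule one.
	var kept []corev1.Taint
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept
	_, err = client.CoreV1().Nodes().Update(context.Background(), node, metav1.UpdateOptions{})
	fmt.Println("taint removed:", err == nil)
}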
	I1205 19:22:05.942527  549077 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:22:05.942909  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:05.943936  549077 out.go:177] * Verifying Kubernetes components...
	I1205 19:22:05.945223  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:22:06.284991  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:22:06.343812  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:22:06.344263  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 19:22:06.344398  549077 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.185:8443
	I1205 19:22:06.344797  549077 node_ready.go:35] waiting up to 6m0s for node "ha-106302-m03" to be "Ready" ...
	I1205 19:22:06.344937  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:06.344951  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:06.344962  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:06.344969  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:06.358416  549077 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1205 19:22:06.845609  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:06.845637  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:06.845650  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:06.845657  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:06.850140  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:07.345201  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:07.345229  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:07.345238  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:07.345242  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:07.349137  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:07.845591  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:07.845615  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:07.845624  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:07.845628  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:07.849417  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:08.345109  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:08.345139  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:08.345151  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:08.345155  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:08.349617  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:08.350266  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:08.845598  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:08.845626  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:08.845638  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:08.845643  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:08.849144  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:09.345621  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:09.345646  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:09.345656  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:09.345660  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:09.349983  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:09.845757  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:09.845782  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:09.845790  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:09.845794  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:09.849681  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:10.345604  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:10.345635  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:10.345648  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:10.345654  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:10.349727  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:10.350478  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:10.845342  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:10.845367  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:10.845376  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:10.845381  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:10.848990  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:11.346073  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:11.346097  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:11.346105  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:11.346109  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:11.350613  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:11.845378  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:11.845411  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:11.845426  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:11.845434  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:11.849253  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:12.345303  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:12.345337  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:12.345349  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:12.345358  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:12.352355  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:12.353182  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:12.845552  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:12.845581  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:12.845591  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:12.845595  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:12.849732  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:13.345587  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:13.345613  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:13.345623  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:13.345629  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:13.349259  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:13.845165  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:13.845197  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:13.845209  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:13.845214  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:13.849815  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:14.345423  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:14.345458  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:14.345471  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:14.345480  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:14.353042  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:22:14.353960  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:14.845215  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:14.845239  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:14.845248  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:14.845252  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:14.848681  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:15.345651  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:15.345681  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:15.345699  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:15.345706  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:15.349604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:15.845599  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:15.845627  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:15.845637  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:15.845641  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:15.849736  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:16.345974  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:16.346003  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:16.346012  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:16.346017  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:16.350399  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:16.845026  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:16.845057  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:16.845067  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:16.845071  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:16.848713  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:16.849459  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:17.345612  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:17.345660  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:17.345688  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:17.345700  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:17.349461  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:17.845355  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:17.845379  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:17.845388  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:17.845392  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:17.851232  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:22:18.346074  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:18.346098  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:18.346107  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:18.346112  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:18.350327  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:18.845241  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:18.845266  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:18.845273  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:18.845277  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:18.848579  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:18.849652  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:19.345480  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:19.345506  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:19.345515  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:19.345519  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:19.349757  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:19.845572  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:19.845597  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:19.845606  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:19.845621  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:19.849116  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:20.345089  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:20.345113  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:20.345121  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:20.345126  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:20.348890  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:20.846039  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:20.846062  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:20.846070  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:20.846075  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:20.850247  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:20.850972  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:21.345329  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:21.345370  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:21.345381  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:21.345387  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:21.349225  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:21.845571  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:21.845604  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:21.845616  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:21.845622  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:21.849183  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:22.345428  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:22.345453  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:22.345461  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:22.345466  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:22.349371  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:22.845510  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:22.845534  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:22.845543  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:22.845549  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:22.849220  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:23.345442  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:23.345470  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:23.345479  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:23.345484  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:23.349347  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:23.350300  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:23.845549  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:23.845574  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:23.845582  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:23.845587  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:23.849893  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:24.345261  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:24.345292  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:24.345302  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:24.345306  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:24.349136  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:24.845545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:24.845574  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:24.845583  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:24.845586  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:24.849619  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:25.345655  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.345687  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.345745  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.345781  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.349427  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.350218  549077 node_ready.go:49] node "ha-106302-m03" has status "Ready":"True"
	I1205 19:22:25.350237  549077 node_ready.go:38] duration metric: took 19.005417749s for node "ha-106302-m03" to be "Ready" ...
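	The node_ready loop above re-issues the same GET roughly every half second until the node reports a Ready condition of True. A simplified client-go sketch of that poll, with the node name and 6m timeout taken from the log and the kubeconfig location assumed:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its NodeReady condition is True
// or the timeout expires.
func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(client, "ha-106302-m03", 6*time.Minute))
}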
	I1205 19:22:25.350247  549077 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:22:25.350324  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:25.350335  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.350342  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.350347  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.358969  549077 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 19:22:25.365676  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.365768  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-45m77
	I1205 19:22:25.365777  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.365785  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.365790  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.369626  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.370252  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.370268  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.370276  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.370280  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.373604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.374401  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.374417  549077 pod_ready.go:82] duration metric: took 8.712508ms for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.374426  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.374491  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sjsv2
	I1205 19:22:25.374498  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.374505  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.374510  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.377314  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.378099  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.378115  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.378125  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.378130  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.380745  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.381330  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.381354  549077 pod_ready.go:82] duration metric: took 6.920357ms for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.381366  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.381430  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302
	I1205 19:22:25.381437  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.381445  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.381452  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.384565  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.385119  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.385140  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.385150  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.385156  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.387832  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.388313  549077 pod_ready.go:93] pod "etcd-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.388334  549077 pod_ready.go:82] duration metric: took 6.95931ms for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.388344  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.388405  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m02
	I1205 19:22:25.388413  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.388420  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.388426  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.390958  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.391627  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:25.391646  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.391657  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.391664  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.394336  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.394843  549077 pod_ready.go:93] pod "etcd-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.394860  549077 pod_ready.go:82] duration metric: took 6.510348ms for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.394870  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.546322  549077 request.go:632] Waited for 151.362843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m03
	I1205 19:22:25.546441  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m03
	I1205 19:22:25.546457  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.546468  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.546478  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.551505  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:22:25.746379  549077 request.go:632] Waited for 194.045637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.746447  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.746452  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.746460  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.746465  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.749940  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.750364  549077 pod_ready.go:93] pod "etcd-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.750384  549077 pod_ready.go:82] duration metric: took 355.50711ms for pod "etcd-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
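	Each of the pod_ready waits above applies the same idea per pod: a pod counts as "Ready" when its PodReady condition is True, and the extra GET of the hosting node confirms that node is still Ready. A minimal sketch of the per-pod check, with the namespace and pod name taken from the log and the kubeconfig location assumed:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podReady(client, "kube-system", "etcd-ha-106302-m03")
	fmt.Println(ok, err)
}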
	I1205 19:22:25.750410  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.945946  549077 request.go:632] Waited for 195.44547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:22:25.946012  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:22:25.946017  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.946026  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.946031  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.949896  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.146187  549077 request.go:632] Waited for 195.303913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:26.146261  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:26.146266  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.146281  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.146284  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.150155  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.150850  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.150872  549077 pod_ready.go:82] duration metric: took 400.452175ms for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
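	Several of the requests above were delayed by client-go's client-side throttling: the rest.Config dumped earlier leaves QPS and Burst at 0, so client-go falls back to its defaults (about 5 requests per second with a burst of 10), and bursts of GETs queue behind the rate limiter. A minimal sketch of raising those limits before building the clientset; the values here are illustrative, not what minikube uses.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // allow more requests per second than the default
	cfg.Burst = 100 // and a larger burst than the default
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println("clientset ready:", client != nil)
}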
	I1205 19:22:26.150884  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.346018  549077 request.go:632] Waited for 195.032626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:22:26.346106  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:22:26.346114  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.346126  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.346134  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.350215  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:26.546617  549077 request.go:632] Waited for 195.375501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:26.546704  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:26.546710  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.546718  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.546722  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.550695  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.551267  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.551288  549077 pod_ready.go:82] duration metric: took 400.395912ms for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.551301  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.746009  549077 request.go:632] Waited for 194.599498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m03
	I1205 19:22:26.746081  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m03
	I1205 19:22:26.746088  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.746096  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.746102  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.750448  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:26.945801  549077 request.go:632] Waited for 194.318273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:26.945876  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:26.945882  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.945893  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.945901  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.949211  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.949781  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.949807  549077 pod_ready.go:82] duration metric: took 398.493465ms for pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.949821  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.145762  549077 request.go:632] Waited for 195.843082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:22:27.145841  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:22:27.145847  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.145856  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.145863  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.150825  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:27.346689  549077 request.go:632] Waited for 195.243035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:27.346772  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:27.346785  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.346804  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.346815  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.350485  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:27.351090  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:27.351111  549077 pod_ready.go:82] duration metric: took 401.282274ms for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.351122  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.546113  549077 request.go:632] Waited for 194.908111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:22:27.546216  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:22:27.546228  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.546241  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.546255  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.550360  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:27.746526  549077 request.go:632] Waited for 195.360331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:27.746617  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:27.746626  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.746635  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.746640  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.753462  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:27.754708  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:27.754735  549077 pod_ready.go:82] duration metric: took 403.601936ms for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.754750  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.945674  549077 request.go:632] Waited for 190.826423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m03
	I1205 19:22:27.945746  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m03
	I1205 19:22:27.945752  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.945760  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.945764  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.949668  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.146444  549077 request.go:632] Waited for 195.387763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.146510  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.146515  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.146523  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.146535  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.150750  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.151357  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.151381  549077 pod_ready.go:82] duration metric: took 396.622007ms for pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.151393  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.345948  549077 request.go:632] Waited for 194.471828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:22:28.346043  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:22:28.346051  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.346059  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.346064  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.350114  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.546260  549077 request.go:632] Waited for 195.407825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:28.546369  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:28.546382  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.546394  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.546413  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.551000  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.551628  549077 pod_ready.go:93] pod "kube-proxy-n57lf" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.551654  549077 pod_ready.go:82] duration metric: took 400.254319ms for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.551666  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pghdx" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.746587  549077 request.go:632] Waited for 194.82213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pghdx
	I1205 19:22:28.746705  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pghdx
	I1205 19:22:28.746718  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.746727  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.746737  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.750453  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.946581  549077 request.go:632] Waited for 195.373436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.946682  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.946693  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.946704  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.946714  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.949892  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.950341  549077 pod_ready.go:93] pod "kube-proxy-pghdx" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.950360  549077 pod_ready.go:82] duration metric: took 398.68655ms for pod "kube-proxy-pghdx" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.950370  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.145964  549077 request.go:632] Waited for 195.515335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:22:29.146035  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:22:29.146042  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.146052  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.146058  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.149161  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:29.346356  549077 request.go:632] Waited for 196.408917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.346467  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.346475  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.346505  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.346577  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.350334  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:29.351251  549077 pod_ready.go:93] pod "kube-proxy-zw6nj" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:29.351290  549077 pod_ready.go:82] duration metric: took 400.913186ms for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.351307  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.545602  549077 request.go:632] Waited for 194.210598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:22:29.545674  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:22:29.545682  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.545694  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.545705  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.549980  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:29.746034  549077 request.go:632] Waited for 195.473431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.746121  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.746128  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.746140  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.746148  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.750509  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:29.751460  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:29.751481  549077 pod_ready.go:82] duration metric: took 400.162109ms for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.751493  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.946019  549077 request.go:632] Waited for 194.44438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:22:29.946119  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:22:29.946131  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.946140  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.946148  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.949224  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.146466  549077 request.go:632] Waited for 196.38785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:30.146542  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:30.146550  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.146562  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.146575  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.150163  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.150654  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:30.150677  549077 pod_ready.go:82] duration metric: took 399.174639ms for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.150688  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.346682  549077 request.go:632] Waited for 195.915039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m03
	I1205 19:22:30.346759  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m03
	I1205 19:22:30.346764  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.346773  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.346788  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.350596  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.545763  549077 request.go:632] Waited for 194.297931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:30.545847  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:30.545854  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.545865  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.545873  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.549623  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.550473  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:30.550494  549077 pod_ready.go:82] duration metric: took 399.800176ms for pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.550505  549077 pod_ready.go:39] duration metric: took 5.200248716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
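
The pod_ready loop above polls each system pod's Ready condition (and its node) through the API server; the "Waited for ... due to client-side throttling" messages come from client-go's default client-side rate limiter. A minimal client-go sketch of the same polling pattern, assuming a standard kubeconfig and a hypothetical pod name; this is not minikube's pod_ready.go implementation:

// Hypothetical sketch of the Ready-condition polling pattern shown above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// "kube-proxy-zw6nj" is taken from the log above purely as an example target.
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-zw6nj", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
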
	I1205 19:22:30.550539  549077 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:22:30.550598  549077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:22:30.565872  549077 api_server.go:72] duration metric: took 24.623303746s to wait for apiserver process to appear ...
	I1205 19:22:30.565908  549077 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:22:30.565931  549077 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I1205 19:22:30.570332  549077 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I1205 19:22:30.570415  549077 round_trippers.go:463] GET https://192.168.39.185:8443/version
	I1205 19:22:30.570426  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.570440  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.570444  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.571545  549077 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:22:30.571615  549077 api_server.go:141] control plane version: v1.31.2
	I1205 19:22:30.571635  549077 api_server.go:131] duration metric: took 5.719204ms to wait for apiserver health ...
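
The health check above probes the raw /healthz endpoint and then GETs /version to read the control-plane version. A minimal sketch of the same probe through client-go's discovery REST client, assuming a reachable kubeconfig; this is not minikube's api_server.go code:

// Sketch: apiserver health and version probe, as logged above.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz (raw path); a healthy apiserver answers 200 with body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version, as in the control-plane version check in the log.
	ver, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", ver.GitVersion)
}
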
	I1205 19:22:30.571664  549077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:22:30.746133  549077 request.go:632] Waited for 174.37713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:30.746217  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:30.746231  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.746244  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.746251  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.753131  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:30.760159  549077 system_pods.go:59] 24 kube-system pods found
	I1205 19:22:30.760194  549077 system_pods.go:61] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:22:30.760202  549077 system_pods.go:61] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:22:30.760208  549077 system_pods.go:61] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:22:30.760214  549077 system_pods.go:61] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:22:30.760219  549077 system_pods.go:61] "etcd-ha-106302-m03" [08e9ef91-8e16-4ff1-a2df-8275e72a5697] Running
	I1205 19:22:30.760224  549077 system_pods.go:61] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:22:30.760228  549077 system_pods.go:61] "kindnet-wdsv9" [83d82f5d-42c3-47be-af20-41b82c16b114] Running
	I1205 19:22:30.760233  549077 system_pods.go:61] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:22:30.760238  549077 system_pods.go:61] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:22:30.760243  549077 system_pods.go:61] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:22:30.760249  549077 system_pods.go:61] "kube-apiserver-ha-106302-m03" [398242aa-f015-47ca-9132-23412c52878d] Running
	I1205 19:22:30.760254  549077 system_pods.go:61] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:22:30.760259  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:22:30.760288  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m03" [8af17291-c1b7-417f-a2dd-5a00ca58b07e] Running
	I1205 19:22:30.760294  549077 system_pods.go:61] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:22:30.760300  549077 system_pods.go:61] "kube-proxy-pghdx" [915060a3-353c-4a2c-a9d6-494206776446] Running
	I1205 19:22:30.760306  549077 system_pods.go:61] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:22:30.760312  549077 system_pods.go:61] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:22:30.760321  549077 system_pods.go:61] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:22:30.760327  549077 system_pods.go:61] "kube-scheduler-ha-106302-m03" [1b601e0c-59c7-4248-b29c-44d19934f590] Running
	I1205 19:22:30.760333  549077 system_pods.go:61] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:22:30.760339  549077 system_pods.go:61] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:22:30.760347  549077 system_pods.go:61] "kube-vip-ha-106302-m03" [6e511769-148e-43eb-a4bb-6dd72dfcd11d] Running
	I1205 19:22:30.760352  549077 system_pods.go:61] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:22:30.760361  549077 system_pods.go:74] duration metric: took 188.685514ms to wait for pod list to return data ...
	I1205 19:22:30.760375  549077 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:22:30.946070  549077 request.go:632] Waited for 185.595824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:22:30.946137  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:22:30.946142  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.946151  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.946159  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.950732  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:30.950901  549077 default_sa.go:45] found service account: "default"
	I1205 19:22:30.950919  549077 default_sa.go:55] duration metric: took 190.53748ms for default service account to be created ...
	I1205 19:22:30.950929  549077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:22:31.146374  549077 request.go:632] Waited for 195.332956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:31.146437  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:31.146443  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:31.146451  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:31.146456  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:31.153763  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:22:31.160825  549077 system_pods.go:86] 24 kube-system pods found
	I1205 19:22:31.160858  549077 system_pods.go:89] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:22:31.160865  549077 system_pods.go:89] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:22:31.160869  549077 system_pods.go:89] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:22:31.160874  549077 system_pods.go:89] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:22:31.160878  549077 system_pods.go:89] "etcd-ha-106302-m03" [08e9ef91-8e16-4ff1-a2df-8275e72a5697] Running
	I1205 19:22:31.160882  549077 system_pods.go:89] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:22:31.160888  549077 system_pods.go:89] "kindnet-wdsv9" [83d82f5d-42c3-47be-af20-41b82c16b114] Running
	I1205 19:22:31.160893  549077 system_pods.go:89] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:22:31.160900  549077 system_pods.go:89] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:22:31.160908  549077 system_pods.go:89] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:22:31.160914  549077 system_pods.go:89] "kube-apiserver-ha-106302-m03" [398242aa-f015-47ca-9132-23412c52878d] Running
	I1205 19:22:31.160925  549077 system_pods.go:89] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:22:31.160931  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:22:31.160937  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m03" [8af17291-c1b7-417f-a2dd-5a00ca58b07e] Running
	I1205 19:22:31.160946  549077 system_pods.go:89] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:22:31.160950  549077 system_pods.go:89] "kube-proxy-pghdx" [915060a3-353c-4a2c-a9d6-494206776446] Running
	I1205 19:22:31.160956  549077 system_pods.go:89] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:22:31.160960  549077 system_pods.go:89] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:22:31.160970  549077 system_pods.go:89] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:22:31.160976  549077 system_pods.go:89] "kube-scheduler-ha-106302-m03" [1b601e0c-59c7-4248-b29c-44d19934f590] Running
	I1205 19:22:31.160979  549077 system_pods.go:89] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:22:31.160985  549077 system_pods.go:89] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:22:31.160989  549077 system_pods.go:89] "kube-vip-ha-106302-m03" [6e511769-148e-43eb-a4bb-6dd72dfcd11d] Running
	I1205 19:22:31.160992  549077 system_pods.go:89] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:22:31.161001  549077 system_pods.go:126] duration metric: took 210.065272ms to wait for k8s-apps to be running ...
	I1205 19:22:31.161014  549077 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:22:31.161075  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:22:31.179416  549077 system_svc.go:56] duration metric: took 18.393613ms WaitForService to wait for kubelet
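
The kubelet check above runs systemctl is-active inside the VM via minikube's SSH runner. A short sketch of the same check, assuming local execution instead of SSH; a zero exit status means the unit is active:

// Sketch: systemd liveness check for the kubelet unit (run locally, an assumption).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		// Non-zero exit: unit is inactive, failed, or not installed.
		fmt.Println("kubelet service is not running:", err)
		return
	}
	fmt.Println("kubelet service is running")
}
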
	I1205 19:22:31.179447  549077 kubeadm.go:582] duration metric: took 25.236889217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:22:31.179468  549077 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:22:31.345848  549077 request.go:632] Waited for 166.292279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes
	I1205 19:22:31.345915  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes
	I1205 19:22:31.345920  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:31.345937  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:31.345942  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:31.350337  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:31.351373  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351397  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351414  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351420  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351426  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351430  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351436  549077 node_conditions.go:105] duration metric: took 171.962205ms to run NodePressure ...
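
The NodePressure pass above lists the nodes and reads each node's capacity (cpu, ephemeral-storage) and conditions. A hedged client-go sketch that reports the same fields, again assuming a standard kubeconfig and not reproducing minikube's node_conditions.go:

// Sketch: list nodes, print capacity, and flag any pressure conditions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True\n", c.Type)
			}
		}
	}
}
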
	I1205 19:22:31.351452  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:22:31.351479  549077 start.go:255] writing updated cluster config ...
	I1205 19:22:31.351794  549077 ssh_runner.go:195] Run: rm -f paused
	I1205 19:22:31.407206  549077 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 19:22:31.410298  549077 out.go:177] * Done! kubectl is now configured to use "ha-106302" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.774964600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426778774938735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e91b955-7cd8-4ac1-8346-46e430fe1635 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.775792395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac77dfba-5706-459a-b198-1faeab645293 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.775844867Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac77dfba-5706-459a-b198-1faeab645293 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.776057652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac77dfba-5706-459a-b198-1faeab645293 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.822385009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7a111dc-6e1e-4f49-a0cb-d43a9e1609c4 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.822456443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7a111dc-6e1e-4f49-a0cb-d43a9e1609c4 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.824830896Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce751762-5fbe-4e51-803b-6685758cfd0c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.825262502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426778825240972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce751762-5fbe-4e51-803b-6685758cfd0c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.826208452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1b2b83e-d6bf-4c53-8aca-5887daba7c30 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.826280130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1b2b83e-d6bf-4c53-8aca-5887daba7c30 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.826572158Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1b2b83e-d6bf-4c53-8aca-5887daba7c30 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.869541832Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a430209f-96ce-4545-9781-4e5555d1f5f1 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.869614925Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a430209f-96ce-4545-9781-4e5555d1f5f1 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.871137631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50e73530-1fca-43c9-8b47-5758ba924c35 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.871893894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426778871863486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50e73530-1fca-43c9-8b47-5758ba924c35 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.872428505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6143992-07bc-46f3-a23f-693a586f980f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.872549413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6143992-07bc-46f3-a23f-693a586f980f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.872797534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6143992-07bc-46f3-a23f-693a586f980f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.913272914Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10d0876d-fd0d-4f0e-973f-7e7772679b8b name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.913345454Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10d0876d-fd0d-4f0e-973f-7e7772679b8b name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.914433525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10182cd3-f4f4-418c-910c-fd76893948ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.915110852Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426778915081974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10182cd3-f4f4-418c-910c-fd76893948ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.915740451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57e23b9d-9e4f-45b8-889d-a7dd1dcbfa29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.915839357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57e23b9d-9e4f-45b8-889d-a7dd1dcbfa29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:18 ha-106302 crio[666]: time="2024-12-05 19:26:18.916144451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57e23b9d-9e4f-45b8-889d-a7dd1dcbfa29 name=/runtime.v1.RuntimeService/ListContainers
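
	The crio entries above are routine CRI polling: RuntimeService/Version, ImageService/ImageFsInfo and RuntimeService/ListContainers requests answered by cri-o 1.29.1. As a hedged cross-check (a sketch, not part of the test run), the same data can be pulled by hand from the node with crictl; this assumes the ha-106302 profile seen in these logs and that crictl on the node is already pointed at the cri-o socket:

	  minikube ssh -p ha-106302
	  sudo crictl version       # mirrors the RuntimeService/Version responses above
	  sudo crictl imagefsinfo   # mirrors ImageService/ImageFsInfo
	  sudo crictl ps -a         # the container list behind RuntimeService/ListContainers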
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8175779cb5746       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   619925cbc39c6       busybox-7dff88458-p8z47
	d7af42dff52cf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   95ad32628ed37       coredns-7c65d6cfc9-sjsv2
	71878f2ac51ce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   79783fce24db9       coredns-7c65d6cfc9-45m77
	a647561fc8a81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ba65941872158       storage-provisioner
	8e0e4de270d59       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   5f62be7378940       kindnet-xr9mh
	013c8063671c4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   dc8d6361e4972       kube-proxy-zw6nj
	a639bf005af20       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   3cfec88984b8a       kube-vip-ha-106302
	73802addf28ef       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   594e9eb586b32       etcd-ha-106302
	8d7fcd5f7d56d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   c920b14cf50aa       kube-apiserver-ha-106302
	dec1697264029       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   411118291d3f3       kube-scheduler-ha-106302
	c251344563e46       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   890699ae2c7d2       kube-controller-manager-ha-106302
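
	The container status table shows the control-plane static pods, kube-proxy, kindnet, coredns, storage-provisioner and the busybox test pod all Running with restart count 0 on this node. A hedged way to compare this runtime view against the API server's view (the context name ha-106302 is taken from the profile in these logs; adjust if the kubeconfig differs):

	  kubectl --context ha-106302 get pods -A -o wide --field-selector spec.nodeName=ha-106302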
	
	
	==> coredns [71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07] <==
	[INFO] 127.0.0.1:37176 - 32561 "HINFO IN 3495974066793148999.5277118907247610982. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022894865s
	[INFO] 10.244.1.2:51203 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.01735349s
	[INFO] 10.244.2.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000272502s
	[INFO] 10.244.2.2:53757 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001751263s
	[INFO] 10.244.2.2:54738 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000495007s
	[INFO] 10.244.0.4:45576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000412263s
	[INFO] 10.244.0.4:48159 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000083837s
	[INFO] 10.244.1.2:34578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000302061s
	[INFO] 10.244.1.2:54721 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000235254s
	[INFO] 10.244.1.2:43877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000206178s
	[INFO] 10.244.1.2:35725 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012413s
	[INFO] 10.244.2.2:53111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00036507s
	[INFO] 10.244.2.2:60205 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00019223s
	[INFO] 10.244.2.2:49031 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000279282s
	[INFO] 10.244.1.2:48336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174589s
	[INFO] 10.244.1.2:47520 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164259s
	[INFO] 10.244.1.2:58000 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119136s
	[INFO] 10.244.1.2:52602 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196285s
	[INFO] 10.244.2.2:53065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143333s
	[INFO] 10.244.0.4:50807 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119749s
	[INFO] 10.244.0.4:60692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073699s
	[INFO] 10.244.1.2:46283 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281341s
	[INFO] 10.244.1.2:51750 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153725s
	[INFO] 10.244.2.2:33715 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141245s
	[INFO] 10.244.0.4:40497 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233306s
	
	
	==> coredns [d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b] <==
	[INFO] 10.244.2.2:53827 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001485777s
	[INFO] 10.244.2.2:55594 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000308847s
	[INFO] 10.244.2.2:34459 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118477s
	[INFO] 10.244.2.2:39473 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062912s
	[INFO] 10.244.0.4:50797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084736s
	[INFO] 10.244.0.4:49715 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001903972s
	[INFO] 10.244.0.4:60150 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000344373s
	[INFO] 10.244.0.4:43238 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075717s
	[INFO] 10.244.0.4:55133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001508595s
	[INFO] 10.244.0.4:49161 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071435s
	[INFO] 10.244.0.4:34396 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048471s
	[INFO] 10.244.0.4:40602 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037032s
	[INFO] 10.244.2.2:46010 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013718s
	[INFO] 10.244.2.2:59322 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108224s
	[INFO] 10.244.2.2:38750 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154868s
	[INFO] 10.244.0.4:43291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123825s
	[INFO] 10.244.0.4:44515 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163484s
	[INFO] 10.244.1.2:60479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154514s
	[INFO] 10.244.1.2:42615 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210654s
	[INFO] 10.244.2.2:57422 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132377s
	[INFO] 10.244.2.2:51037 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00039203s
	[INFO] 10.244.2.2:35850 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148988s
	[INFO] 10.244.0.4:37661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206627s
	[INFO] 10.244.0.4:43810 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129193s
	[INFO] 10.244.0.4:47355 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145369s
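
	Both coredns replicas are answering lookups from all three pod CIDRs in use here (10.244.0.0/24, 10.244.1.0/24, 10.244.2.0/24) with NOERROR/NXDOMAIN as appropriate, so in-cluster DNS looks healthy at this point. A hedged way to generate the same kind of query from a throwaway pod, following the usual Kubernetes DNS-debugging pattern (the pod name dns-probe and the busybox:1.28 tag are illustrative choices, not part of this test):

	  kubectl --context ha-106302 run dns-probe --rm -it --restart=Never --image=busybox:1.28 \
	    -- nslookup kubernetes.default.svc.cluster.local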
	
	
	==> describe nodes <==
	Name:               ha-106302
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T19_19_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:19:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:20:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    ha-106302
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fbfe8f29ea445c2a705d4735bab42d9
	  System UUID:                9fbfe8f2-9ea4-45c2-a705-d4735bab42d9
	  Boot ID:                    fbdd1078-6187-4d3e-90aa-6ba60d4d7163
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p8z47              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 coredns-7c65d6cfc9-45m77             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 coredns-7c65d6cfc9-sjsv2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 etcd-ha-106302                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m32s
	  kube-system                 kindnet-xr9mh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m27s
	  kube-system                 kube-apiserver-ha-106302             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-controller-manager-ha-106302    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-proxy-zw6nj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-scheduler-ha-106302             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-vip-ha-106302                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m26s  kube-proxy       
	  Normal  Starting                 6m32s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m32s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m32s  kubelet          Node ha-106302 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s  kubelet          Node ha-106302 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m32s  kubelet          Node ha-106302 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m28s  node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
	  Normal  NodeReady                6m11s  kubelet          Node ha-106302 status is now: NodeReady
	  Normal  RegisteredNode           5m23s  node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
	  Normal  RegisteredNode           4m8s   node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
	
	
	Name:               ha-106302-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_20_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:20:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:23:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-106302-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ca37a23968d4b139155a7b713c26828
	  System UUID:                3ca37a23-968d-4b13-9155-a7b713c26828
	  Boot ID:                    36db6c69-1ef9-45e9-8548-ed0c2d08168d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9kxtc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-106302-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m30s
	  kube-system                 kindnet-thcsp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m32s
	  kube-system                 kube-apiserver-ha-106302-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-106302-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-n57lf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-scheduler-ha-106302-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-vip-ha-106302-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m32s (x8 over 5m32s)  kubelet          Node ha-106302-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m32s (x8 over 5m32s)  kubelet          Node ha-106302-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m32s (x7 over 5m32s)  kubelet          Node ha-106302-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-106302-m02 status is now: NodeNotReady
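
	ha-106302-m02's conditions flipped to Unknown and the node picked up node.kubernetes.io/unreachable taints after its kubelet stopped posting status at 19:24:36, which is consistent with the secondary control-plane node having been stopped during this test. A hedged way to surface the same signal from the CLI (context name taken from the profile):

	  kubectl --context ha-106302 get node ha-106302-m02 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	  kubectl --context ha-106302 get node ha-106302-m02 -o jsonpath='{.spec.taints}{"\n"}'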
	
	
	Name:               ha-106302-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_22_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:22:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.151
	  Hostname:    ha-106302-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c79436ccca5a4dcb864b64b8f1638e64
	  System UUID:                c79436cc-ca5a-4dcb-864b-64b8f1638e64
	  Boot ID:                    c0d22d1e-5115-47a7-a1b2-4a76f9bfc0f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tp62                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-106302-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m15s
	  kube-system                 kindnet-wdsv9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m17s
	  kube-system                 kube-apiserver-ha-106302-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-controller-manager-ha-106302-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-proxy-pghdx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-ha-106302-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-vip-ha-106302-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m17s (x8 over 4m17s)  kubelet          Node ha-106302-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s (x8 over 4m17s)  kubelet          Node ha-106302-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x7 over 4m17s)  kubelet          Node ha-106302-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	
	
	Name:               ha-106302-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_23_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:23:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-106302-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 230adc0a6a8a4784a2711e0f05c0dc5c
	  System UUID:                230adc0a-6a8a-4784-a271-1e0f05c0dc5c
	  Boot ID:                    c550c7a6-b9cf-4484-890e-5c6b9b697be6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4x5qd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m9s
	  kube-system                 kube-proxy-2dvtn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 3m3s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m10s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m9s                  cidrAllocator    Node ha-106302-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m9s (x2 over 3m10s)  kubelet          Node ha-106302-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m9s (x2 over 3m10s)  kubelet          Node ha-106302-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m9s (x2 over 3m10s)  kubelet          Node ha-106302-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m8s                  node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  RegisteredNode           3m8s                  node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  RegisteredNode           3m8s                  node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  NodeReady                2m48s                 kubelet          Node ha-106302-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 5 19:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052678] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040068] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.967635] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.737822] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.642469] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.132933] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.059010] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077817] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.173461] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.135588] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.266467] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.207512] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +3.975007] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.063464] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.124511] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.093371] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.093366] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.201097] kauditd_printk_skb: 34 callbacks suppressed
	[Dec 5 19:20] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a] <==
	{"level":"warn","ts":"2024-12-05T19:26:19.165215Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.186092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.197712Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.203344Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.216292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.224073Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.233714Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.237594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.241769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.252833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.260002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.265365Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.267395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.274371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.275763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.279736Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.287315Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.293608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.300606Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.304200Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.308218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.313580Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.320178Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.326821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:19.365185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:26:19 up 7 min,  0 users,  load average: 0.31, 0.28, 0.14
	Linux ha-106302 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e] <==
	I1205 19:25:48.038691       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:25:58.032212       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:25:58.032349       1 main.go:301] handling current node
	I1205 19:25:58.032381       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:25:58.032409       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:25:58.032728       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:25:58.032781       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:25:58.032936       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:25:58.032961       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:26:08.033900       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:26:08.033997       1 main.go:301] handling current node
	I1205 19:26:08.034040       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:26:08.034061       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:26:08.034788       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:26:08.034868       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:26:08.035323       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:26:08.036186       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:26:18.031621       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:26:18.031663       1 main.go:301] handling current node
	I1205 19:26:18.031679       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:26:18.031683       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:26:18.031927       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:26:18.031962       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:26:18.032073       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:26:18.032101       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44] <==
	W1205 19:19:46.101456       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.185]
	I1205 19:19:46.102689       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 19:19:46.107444       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 19:19:46.330379       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 19:19:47.696704       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 19:19:47.715088       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 19:19:47.729079       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 19:19:52.034082       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1205 19:19:52.100936       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1205 19:22:38.001032       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32830: use of closed network connection
	E1205 19:22:38.204236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32840: use of closed network connection
	E1205 19:22:38.401399       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32852: use of closed network connection
	E1205 19:22:38.650810       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32868: use of closed network connection
	E1205 19:22:38.848239       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32882: use of closed network connection
	E1205 19:22:39.039033       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32892: use of closed network connection
	E1205 19:22:39.233185       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32904: use of closed network connection
	E1205 19:22:39.423024       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32930: use of closed network connection
	E1205 19:22:39.623335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32946: use of closed network connection
	E1205 19:22:39.929919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32972: use of closed network connection
	E1205 19:22:40.109732       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32994: use of closed network connection
	E1205 19:22:40.313792       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33004: use of closed network connection
	E1205 19:22:40.512273       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33032: use of closed network connection
	E1205 19:22:40.696838       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33064: use of closed network connection
	E1205 19:22:40.891466       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33092: use of closed network connection
	W1205 19:23:56.103047       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.151 192.168.39.185]
	
	
	==> kube-controller-manager [c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb] <==
	I1205 19:22:37.515258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.952µs"
	I1205 19:22:50.027185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:22:51.994933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302"
	I1205 19:23:03.348987       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m03"
	I1205 19:23:10.074709       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-106302-m04\" does not exist"
	I1205 19:23:10.130455       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-106302-m04" podCIDRs=["10.244.3.0/24"]
	I1205 19:23:10.130559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.130592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.405830       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.799985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:11.200921       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-106302-m04"
	I1205 19:23:11.286372       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:20.510971       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.164993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.165813       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-106302-m04"
	I1205 19:23:31.181172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.224422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:41.047269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:24:36.318018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:36.318367       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-106302-m04"
	I1205 19:24:36.348027       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:36.462551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.68033ms"
	I1205 19:24:36.463140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="102.944µs"
	I1205 19:24:36.509355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:41.525728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	
	
	==> kube-proxy [013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 19:19:53.137314       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 19:19:53.171420       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E1205 19:19:53.171824       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 19:19:53.214655       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 19:19:53.214741       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 19:19:53.214788       1 server_linux.go:169] "Using iptables Proxier"
	I1205 19:19:53.217916       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 19:19:53.218705       1 server.go:483] "Version info" version="v1.31.2"
	I1205 19:19:53.218777       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:19:53.220962       1 config.go:199] "Starting service config controller"
	I1205 19:19:53.221650       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 19:19:53.221992       1 config.go:105] "Starting endpoint slice config controller"
	I1205 19:19:53.222064       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 19:19:53.223609       1 config.go:328] "Starting node config controller"
	I1205 19:19:53.226006       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 19:19:53.322722       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 19:19:53.322841       1 shared_informer.go:320] Caches are synced for service config
	I1205 19:19:53.326785       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8] <==
	W1205 19:19:45.698374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:19:45.698482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:19:45.740149       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:19:45.740541       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1205 19:19:48.195246       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1205 19:22:02.375222       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tpm2m\": pod kube-proxy-tpm2m is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tpm2m" node="ha-106302-m03"
	E1205 19:22:02.375416       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1976f453-f240-48ff-bcac-37351800ac58(kube-system/kube-proxy-tpm2m) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tpm2m"
	E1205 19:22:02.375449       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tpm2m\": pod kube-proxy-tpm2m is already assigned to node \"ha-106302-m03\"" pod="kube-system/kube-proxy-tpm2m"
	I1205 19:22:02.375580       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tpm2m" node="ha-106302-m03"
	E1205 19:22:02.382616       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wdsv9\": pod kindnet-wdsv9 is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wdsv9" node="ha-106302-m03"
	E1205 19:22:02.382763       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 83d82f5d-42c3-47be-af20-41b82c16b114(kube-system/kindnet-wdsv9) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-wdsv9"
	E1205 19:22:02.382784       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wdsv9\": pod kindnet-wdsv9 is already assigned to node \"ha-106302-m03\"" pod="kube-system/kindnet-wdsv9"
	I1205 19:22:02.382811       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wdsv9" node="ha-106302-m03"
	E1205 19:22:02.429049       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pghdx\": pod kube-proxy-pghdx is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pghdx" node="ha-106302-m03"
	E1205 19:22:02.429116       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 915060a3-353c-4a2c-a9d6-494206776446(kube-system/kube-proxy-pghdx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-pghdx"
	E1205 19:22:02.429132       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pghdx\": pod kube-proxy-pghdx is already assigned to node \"ha-106302-m03\"" pod="kube-system/kube-proxy-pghdx"
	I1205 19:22:02.429156       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-pghdx" node="ha-106302-m03"
	E1205 19:22:32.450165       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-p8z47\": pod busybox-7dff88458-p8z47 is already assigned to node \"ha-106302\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-p8z47" node="ha-106302"
	E1205 19:22:32.450464       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 16e14c1a-196d-42a8-b245-1a488cb9667f(default/busybox-7dff88458-p8z47) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-p8z47"
	E1205 19:22:32.450610       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-p8z47\": pod busybox-7dff88458-p8z47 is already assigned to node \"ha-106302\"" pod="default/busybox-7dff88458-p8z47"
	I1205 19:22:32.450729       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-p8z47" node="ha-106302"
	E1205 19:22:32.450776       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9tp62\": pod busybox-7dff88458-9tp62 is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9tp62" node="ha-106302-m03"
	E1205 19:22:32.459571       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod afb0c778-acb1-4db0-b0b6-f054049d0a9d(default/busybox-7dff88458-9tp62) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-9tp62"
	E1205 19:22:32.460188       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9tp62\": pod busybox-7dff88458-9tp62 is already assigned to node \"ha-106302-m03\"" pod="default/busybox-7dff88458-9tp62"
	I1205 19:22:32.460282       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-9tp62" node="ha-106302-m03"
	
	
	==> kubelet <==
	Dec 05 19:24:47 ha-106302 kubelet[1308]: E1205 19:24:47.778614    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426687778175124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:47 ha-106302 kubelet[1308]: E1205 19:24:47.778767    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426687778175124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:57 ha-106302 kubelet[1308]: E1205 19:24:57.781563    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426697781244346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:57 ha-106302 kubelet[1308]: E1205 19:24:57.781621    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426697781244346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:07 ha-106302 kubelet[1308]: E1205 19:25:07.783663    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426707783267296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:07 ha-106302 kubelet[1308]: E1205 19:25:07.783686    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426707783267296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:17 ha-106302 kubelet[1308]: E1205 19:25:17.787301    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426717786088822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:17 ha-106302 kubelet[1308]: E1205 19:25:17.788092    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426717786088822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:27 ha-106302 kubelet[1308]: E1205 19:25:27.791254    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426727789306197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:27 ha-106302 kubelet[1308]: E1205 19:25:27.792185    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426727789306197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:37 ha-106302 kubelet[1308]: E1205 19:25:37.793643    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426737793262536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:37 ha-106302 kubelet[1308]: E1205 19:25:37.793688    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426737793262536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.685793    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 19:25:47 ha-106302 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.795235    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426747794906816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.795258    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426747794906816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:57 ha-106302 kubelet[1308]: E1205 19:25:57.797302    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426757796435936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:57 ha-106302 kubelet[1308]: E1205 19:25:57.798201    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426757796435936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:07 ha-106302 kubelet[1308]: E1205 19:26:07.800104    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426767799828720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:07 ha-106302 kubelet[1308]: E1205 19:26:07.800714    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426767799828720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:17 ha-106302 kubelet[1308]: E1205 19:26:17.806169    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426777803286232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:17 ha-106302 kubelet[1308]: E1205 19:26:17.806235    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426777803286232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-106302 -n ha-106302
helpers_test.go:261: (dbg) Run:  kubectl --context ha-106302 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr: (4.039391323s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-106302 -n ha-106302
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-106302 logs -n 25: (1.516985717s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302:/home/docker/cp-test_ha-106302-m03_ha-106302.txt                     |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302 sudo cat                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302.txt                               |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m04 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp testdata/cp-test.txt                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302:/home/docker/cp-test_ha-106302-m04_ha-106302.txt                     |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302 sudo cat                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302.txt                               |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03:/home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m03 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-106302 node stop m02 -v=7                                                   | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-106302 node start m02 -v=7                                                  | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:19:05
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:19:05.666020  549077 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:19:05.666172  549077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:19:05.666182  549077 out.go:358] Setting ErrFile to fd 2...
	I1205 19:19:05.666187  549077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:19:05.666372  549077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:19:05.666982  549077 out.go:352] Setting JSON to false
	I1205 19:19:05.667993  549077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7292,"bootTime":1733419054,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:19:05.668118  549077 start.go:139] virtualization: kvm guest
	I1205 19:19:05.670258  549077 out.go:177] * [ha-106302] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:19:05.672244  549077 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:19:05.672310  549077 notify.go:220] Checking for updates...
	I1205 19:19:05.674836  549077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:19:05.676311  549077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:05.677586  549077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:05.678906  549077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:19:05.680179  549077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:19:05.681501  549077 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:19:05.716520  549077 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 19:19:05.718361  549077 start.go:297] selected driver: kvm2
	I1205 19:19:05.718375  549077 start.go:901] validating driver "kvm2" against <nil>
	I1205 19:19:05.718387  549077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:19:05.719138  549077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:19:05.719217  549077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:19:05.734721  549077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:19:05.734777  549077 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:19:05.735145  549077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:19:05.735198  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:05.735258  549077 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 19:19:05.735271  549077 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:19:05.735352  549077 start.go:340] cluster config:
	{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:19:05.735498  549077 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:19:05.737389  549077 out.go:177] * Starting "ha-106302" primary control-plane node in "ha-106302" cluster
	I1205 19:19:05.738520  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:05.738565  549077 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:19:05.738579  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:19:05.738663  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:19:05.738678  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:19:05.739034  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:05.739058  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json: {Name:mk36f887968924e3b867abb3b152df7882583b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:05.739210  549077 start.go:360] acquireMachinesLock for ha-106302: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:19:05.739241  549077 start.go:364] duration metric: took 16.973µs to acquireMachinesLock for "ha-106302"
	I1205 19:19:05.739258  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:05.739311  549077 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 19:19:05.740876  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:19:05.741018  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:05.741056  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:05.755320  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I1205 19:19:05.755768  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:05.756364  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:05.756386  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:05.756720  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:05.756918  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:05.757058  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:05.757247  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:19:05.757287  549077 client.go:168] LocalClient.Create starting
	I1205 19:19:05.757338  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:19:05.757377  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:05.757396  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:05.757476  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:19:05.757503  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:05.757522  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:05.757549  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:19:05.757567  549077 main.go:141] libmachine: (ha-106302) Calling .PreCreateCheck
	I1205 19:19:05.757886  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:05.758310  549077 main.go:141] libmachine: Creating machine...
	I1205 19:19:05.758325  549077 main.go:141] libmachine: (ha-106302) Calling .Create
	I1205 19:19:05.758443  549077 main.go:141] libmachine: (ha-106302) Creating KVM machine...
	I1205 19:19:05.759563  549077 main.go:141] libmachine: (ha-106302) DBG | found existing default KVM network
	I1205 19:19:05.760292  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:05.760130  549100 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1205 19:19:05.760373  549077 main.go:141] libmachine: (ha-106302) DBG | created network xml: 
	I1205 19:19:05.760394  549077 main.go:141] libmachine: (ha-106302) DBG | <network>
	I1205 19:19:05.760405  549077 main.go:141] libmachine: (ha-106302) DBG |   <name>mk-ha-106302</name>
	I1205 19:19:05.760417  549077 main.go:141] libmachine: (ha-106302) DBG |   <dns enable='no'/>
	I1205 19:19:05.760428  549077 main.go:141] libmachine: (ha-106302) DBG |   
	I1205 19:19:05.760437  549077 main.go:141] libmachine: (ha-106302) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 19:19:05.760450  549077 main.go:141] libmachine: (ha-106302) DBG |     <dhcp>
	I1205 19:19:05.760460  549077 main.go:141] libmachine: (ha-106302) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 19:19:05.760472  549077 main.go:141] libmachine: (ha-106302) DBG |     </dhcp>
	I1205 19:19:05.760488  549077 main.go:141] libmachine: (ha-106302) DBG |   </ip>
	I1205 19:19:05.760499  549077 main.go:141] libmachine: (ha-106302) DBG |   
	I1205 19:19:05.760507  549077 main.go:141] libmachine: (ha-106302) DBG | </network>
	I1205 19:19:05.760517  549077 main.go:141] libmachine: (ha-106302) DBG | 
	I1205 19:19:05.765547  549077 main.go:141] libmachine: (ha-106302) DBG | trying to create private KVM network mk-ha-106302 192.168.39.0/24...
	I1205 19:19:05.832912  549077 main.go:141] libmachine: (ha-106302) DBG | private KVM network mk-ha-106302 192.168.39.0/24 created
	I1205 19:19:05.832950  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:05.832854  549100 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:05.832976  549077 main.go:141] libmachine: (ha-106302) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 ...
	I1205 19:19:05.832995  549077 main.go:141] libmachine: (ha-106302) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:19:05.833015  549077 main.go:141] libmachine: (ha-106302) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:19:06.116114  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.115928  549100 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa...
	I1205 19:19:06.195132  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.194945  549100 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/ha-106302.rawdisk...
	I1205 19:19:06.195166  549077 main.go:141] libmachine: (ha-106302) DBG | Writing magic tar header
	I1205 19:19:06.195176  549077 main.go:141] libmachine: (ha-106302) DBG | Writing SSH key tar header
	I1205 19:19:06.195183  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.195098  549100 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 ...
	I1205 19:19:06.195194  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302
	I1205 19:19:06.195272  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 (perms=drwx------)
	I1205 19:19:06.195294  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:19:06.195305  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:19:06.195321  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:06.195332  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:19:06.195340  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:19:06.195349  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:19:06.195354  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:19:06.195360  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home
	I1205 19:19:06.195379  549077 main.go:141] libmachine: (ha-106302) DBG | Skipping /home - not owner
	I1205 19:19:06.195390  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:19:06.195397  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:19:06.195403  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:19:06.195409  549077 main.go:141] libmachine: (ha-106302) Creating domain...
	I1205 19:19:06.196529  549077 main.go:141] libmachine: (ha-106302) define libvirt domain using xml: 
	I1205 19:19:06.196544  549077 main.go:141] libmachine: (ha-106302) <domain type='kvm'>
	I1205 19:19:06.196550  549077 main.go:141] libmachine: (ha-106302)   <name>ha-106302</name>
	I1205 19:19:06.196561  549077 main.go:141] libmachine: (ha-106302)   <memory unit='MiB'>2200</memory>
	I1205 19:19:06.196569  549077 main.go:141] libmachine: (ha-106302)   <vcpu>2</vcpu>
	I1205 19:19:06.196578  549077 main.go:141] libmachine: (ha-106302)   <features>
	I1205 19:19:06.196586  549077 main.go:141] libmachine: (ha-106302)     <acpi/>
	I1205 19:19:06.196595  549077 main.go:141] libmachine: (ha-106302)     <apic/>
	I1205 19:19:06.196603  549077 main.go:141] libmachine: (ha-106302)     <pae/>
	I1205 19:19:06.196621  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.196632  549077 main.go:141] libmachine: (ha-106302)   </features>
	I1205 19:19:06.196643  549077 main.go:141] libmachine: (ha-106302)   <cpu mode='host-passthrough'>
	I1205 19:19:06.196652  549077 main.go:141] libmachine: (ha-106302)   
	I1205 19:19:06.196658  549077 main.go:141] libmachine: (ha-106302)   </cpu>
	I1205 19:19:06.196670  549077 main.go:141] libmachine: (ha-106302)   <os>
	I1205 19:19:06.196677  549077 main.go:141] libmachine: (ha-106302)     <type>hvm</type>
	I1205 19:19:06.196689  549077 main.go:141] libmachine: (ha-106302)     <boot dev='cdrom'/>
	I1205 19:19:06.196704  549077 main.go:141] libmachine: (ha-106302)     <boot dev='hd'/>
	I1205 19:19:06.196715  549077 main.go:141] libmachine: (ha-106302)     <bootmenu enable='no'/>
	I1205 19:19:06.196724  549077 main.go:141] libmachine: (ha-106302)   </os>
	I1205 19:19:06.196732  549077 main.go:141] libmachine: (ha-106302)   <devices>
	I1205 19:19:06.196743  549077 main.go:141] libmachine: (ha-106302)     <disk type='file' device='cdrom'>
	I1205 19:19:06.196758  549077 main.go:141] libmachine: (ha-106302)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/boot2docker.iso'/>
	I1205 19:19:06.196769  549077 main.go:141] libmachine: (ha-106302)       <target dev='hdc' bus='scsi'/>
	I1205 19:19:06.196777  549077 main.go:141] libmachine: (ha-106302)       <readonly/>
	I1205 19:19:06.196783  549077 main.go:141] libmachine: (ha-106302)     </disk>
	I1205 19:19:06.196795  549077 main.go:141] libmachine: (ha-106302)     <disk type='file' device='disk'>
	I1205 19:19:06.196806  549077 main.go:141] libmachine: (ha-106302)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:19:06.196821  549077 main.go:141] libmachine: (ha-106302)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/ha-106302.rawdisk'/>
	I1205 19:19:06.196833  549077 main.go:141] libmachine: (ha-106302)       <target dev='hda' bus='virtio'/>
	I1205 19:19:06.196842  549077 main.go:141] libmachine: (ha-106302)     </disk>
	I1205 19:19:06.196851  549077 main.go:141] libmachine: (ha-106302)     <interface type='network'>
	I1205 19:19:06.196861  549077 main.go:141] libmachine: (ha-106302)       <source network='mk-ha-106302'/>
	I1205 19:19:06.196873  549077 main.go:141] libmachine: (ha-106302)       <model type='virtio'/>
	I1205 19:19:06.196896  549077 main.go:141] libmachine: (ha-106302)     </interface>
	I1205 19:19:06.196909  549077 main.go:141] libmachine: (ha-106302)     <interface type='network'>
	I1205 19:19:06.196919  549077 main.go:141] libmachine: (ha-106302)       <source network='default'/>
	I1205 19:19:06.196927  549077 main.go:141] libmachine: (ha-106302)       <model type='virtio'/>
	I1205 19:19:06.196936  549077 main.go:141] libmachine: (ha-106302)     </interface>
	I1205 19:19:06.196944  549077 main.go:141] libmachine: (ha-106302)     <serial type='pty'>
	I1205 19:19:06.196953  549077 main.go:141] libmachine: (ha-106302)       <target port='0'/>
	I1205 19:19:06.196962  549077 main.go:141] libmachine: (ha-106302)     </serial>
	I1205 19:19:06.196975  549077 main.go:141] libmachine: (ha-106302)     <console type='pty'>
	I1205 19:19:06.196984  549077 main.go:141] libmachine: (ha-106302)       <target type='serial' port='0'/>
	I1205 19:19:06.196996  549077 main.go:141] libmachine: (ha-106302)     </console>
	I1205 19:19:06.197007  549077 main.go:141] libmachine: (ha-106302)     <rng model='virtio'>
	I1205 19:19:06.197017  549077 main.go:141] libmachine: (ha-106302)       <backend model='random'>/dev/random</backend>
	I1205 19:19:06.197028  549077 main.go:141] libmachine: (ha-106302)     </rng>
	I1205 19:19:06.197036  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.197055  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.197068  549077 main.go:141] libmachine: (ha-106302)   </devices>
	I1205 19:19:06.197073  549077 main.go:141] libmachine: (ha-106302) </domain>
	I1205 19:19:06.197078  549077 main.go:141] libmachine: (ha-106302) 
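
	For orientation, the domain definition logged above is rendered from an XML template before being handed to libvirt. A minimal sketch of that kind of templating step, using only the Go standard library (the struct, field names, and trimmed-down template below are illustrative, not minikube's actual code):

	    package main

	    import (
	    	"os"
	    	"text/template"
	    )

	    // domainConfig holds the handful of values substituted into the
	    // illustrative template below (hypothetical struct, sketch only).
	    type domainConfig struct {
	    	Name     string
	    	MemoryMB int
	    	VCPUs    int
	    	DiskPath string
	    	ISOPath  string
	    	Network  string
	    }

	    // domainTmpl is a trimmed-down stand-in for the XML shown in the log.
	    const domainTmpl = `<domain type='kvm'>
	      <name>{{.Name}}</name>
	      <memory unit='MiB'>{{.MemoryMB}}</memory>
	      <vcpu>{{.VCPUs}}</vcpu>
	      <devices>
	        <disk type='file' device='cdrom'>
	          <source file='{{.ISOPath}}'/>
	        </disk>
	        <disk type='file' device='disk'>
	          <source file='{{.DiskPath}}'/>
	        </disk>
	        <interface type='network'>
	          <source network='{{.Network}}'/>
	        </interface>
	      </devices>
	    </domain>
	    `

	    func main() {
	    	cfg := domainConfig{
	    		Name:     "ha-106302",
	    		MemoryMB: 2200,
	    		VCPUs:    2,
	    		DiskPath: "/path/to/ha-106302.rawdisk",
	    		ISOPath:  "/path/to/boot2docker.iso",
	    		Network:  "mk-ha-106302",
	    	}
	    	// Render the XML that would later be passed to libvirt's define call.
	    	t := template.Must(template.New("domain").Parse(domainTmpl))
	    	if err := t.Execute(os.Stdout, cfg); err != nil {
	    		panic(err)
	    	}
	    }
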
	I1205 19:19:06.202279  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:71:9c:4d in network default
	I1205 19:19:06.203034  549077 main.go:141] libmachine: (ha-106302) Ensuring networks are active...
	I1205 19:19:06.203055  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:06.203739  549077 main.go:141] libmachine: (ha-106302) Ensuring network default is active
	I1205 19:19:06.204123  549077 main.go:141] libmachine: (ha-106302) Ensuring network mk-ha-106302 is active
	I1205 19:19:06.204705  549077 main.go:141] libmachine: (ha-106302) Getting domain xml...
	I1205 19:19:06.205494  549077 main.go:141] libmachine: (ha-106302) Creating domain...
	I1205 19:19:07.414905  549077 main.go:141] libmachine: (ha-106302) Waiting to get IP...
	I1205 19:19:07.415701  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:07.416131  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:07.416172  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:07.416110  549100 retry.go:31] will retry after 254.984492ms: waiting for machine to come up
	I1205 19:19:07.672644  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:07.673096  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:07.673126  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:07.673025  549100 retry.go:31] will retry after 337.308268ms: waiting for machine to come up
	I1205 19:19:08.011677  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.012131  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.012153  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.012097  549100 retry.go:31] will retry after 331.381496ms: waiting for machine to come up
	I1205 19:19:08.344830  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.345286  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.345315  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.345230  549100 retry.go:31] will retry after 526.921251ms: waiting for machine to come up
	I1205 19:19:08.874020  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.874426  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.874457  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.874366  549100 retry.go:31] will retry after 677.76743ms: waiting for machine to come up
	I1205 19:19:09.554490  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:09.555045  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:09.555078  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:09.554953  549100 retry.go:31] will retry after 810.208397ms: waiting for machine to come up
	I1205 19:19:10.367000  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:10.367429  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:10.367463  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:10.367397  549100 retry.go:31] will retry after 1.115748222s: waiting for machine to come up
	I1205 19:19:11.484531  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:11.485067  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:11.485098  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:11.485008  549100 retry.go:31] will retry after 1.3235703s: waiting for machine to come up
	I1205 19:19:12.810602  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:12.810991  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:12.811014  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:12.810945  549100 retry.go:31] will retry after 1.831554324s: waiting for machine to come up
	I1205 19:19:14.645035  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:14.645488  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:14.645513  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:14.645439  549100 retry.go:31] will retry after 1.712987373s: waiting for machine to come up
	I1205 19:19:16.360441  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:16.361053  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:16.361095  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:16.360964  549100 retry.go:31] will retry after 1.757836043s: waiting for machine to come up
	I1205 19:19:18.120905  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:18.121462  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:18.121490  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:18.121398  549100 retry.go:31] will retry after 2.555295546s: waiting for machine to come up
	I1205 19:19:20.680255  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:20.680831  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:20.680857  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:20.680783  549100 retry.go:31] will retry after 3.433196303s: waiting for machine to come up
	I1205 19:19:24.117782  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:24.118200  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:24.118225  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:24.118165  549100 retry.go:31] will retry after 5.333530854s: waiting for machine to come up
	I1205 19:19:29.456371  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.456820  549077 main.go:141] libmachine: (ha-106302) Found IP for machine: 192.168.39.185
	I1205 19:19:29.456837  549077 main.go:141] libmachine: (ha-106302) Reserving static IP address...
	I1205 19:19:29.456845  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has current primary IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.457259  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find host DHCP lease matching {name: "ha-106302", mac: "52:54:00:3b:e4:76", ip: "192.168.39.185"} in network mk-ha-106302
	I1205 19:19:29.532847  549077 main.go:141] libmachine: (ha-106302) DBG | Getting to WaitForSSH function...
	I1205 19:19:29.532882  549077 main.go:141] libmachine: (ha-106302) Reserved static IP address: 192.168.39.185
	I1205 19:19:29.532895  549077 main.go:141] libmachine: (ha-106302) Waiting for SSH to be available...
	I1205 19:19:29.535405  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.536081  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.536388  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.536771  549077 main.go:141] libmachine: (ha-106302) DBG | Using SSH client type: external
	I1205 19:19:29.536915  549077 main.go:141] libmachine: (ha-106302) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa (-rw-------)
	I1205 19:19:29.536944  549077 main.go:141] libmachine: (ha-106302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:19:29.536962  549077 main.go:141] libmachine: (ha-106302) DBG | About to run SSH command:
	I1205 19:19:29.536972  549077 main.go:141] libmachine: (ha-106302) DBG | exit 0
	I1205 19:19:29.664869  549077 main.go:141] libmachine: (ha-106302) DBG | SSH cmd err, output: <nil>: 
	I1205 19:19:29.665141  549077 main.go:141] libmachine: (ha-106302) KVM machine creation complete!
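
	The "will retry after ..." lines above and the SSH wait that follows come from a generic poll-with-backoff helper. A minimal sketch of that pattern (timeout, growing delay, caller-supplied probe; all names here are illustrative, not minikube's retry.go API):

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"time"
	    )

	    // waitFor polls probe() until it succeeds or timeout elapses, sleeping a
	    // little longer between attempts each time, similar in spirit to the
	    // "will retry after ..." behaviour seen in the log (sketch only).
	    func waitFor(timeout time.Duration, probe func() error) error {
	    	deadline := time.Now().Add(timeout)
	    	delay := 250 * time.Millisecond
	    	for attempt := 1; ; attempt++ {
	    		err := probe()
	    		if err == nil {
	    			return nil
	    		}
	    		if time.Now().After(deadline) {
	    			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
	    		}
	    		fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, delay)
	    		time.Sleep(delay)
	    		delay += delay / 2 // grow the back-off gradually
	    	}
	    }

	    func main() {
	    	start := time.Now()
	    	// Fake probe: the "machine" gets its IP roughly two seconds after start.
	    	err := waitFor(10*time.Second, func() error {
	    		if time.Since(start) < 2*time.Second {
	    			return errors.New("unable to find current IP address")
	    		}
	    		return nil
	    	})
	    	fmt.Println("result:", err)
	    }
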
	I1205 19:19:29.665477  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:29.666068  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:29.666255  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:29.666420  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:19:29.666438  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:29.667703  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:19:29.667716  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:19:29.667721  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:19:29.667726  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.669895  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.670221  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.670248  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.670353  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.670530  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.670706  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.670840  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.671003  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.671220  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.671232  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:19:29.779777  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:19:29.779805  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:19:29.779833  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.782799  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.783132  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.783166  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.783331  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.783547  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.783683  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.783825  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.783999  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.784181  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.784191  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:19:29.893268  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:19:29.893371  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:19:29.893381  549077 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:19:29.893390  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:29.893630  549077 buildroot.go:166] provisioning hostname "ha-106302"
	I1205 19:19:29.893659  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:29.893862  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.896175  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.896531  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.896559  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.896683  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.896874  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.897035  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.897188  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.897357  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.897522  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.897537  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302 && echo "ha-106302" | sudo tee /etc/hostname
	I1205 19:19:30.019869  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:19:30.019903  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.022773  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.023137  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.023166  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.023330  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.023501  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.023684  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.023794  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.023973  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.024192  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.024213  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:19:30.142377  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:19:30.142414  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:19:30.142464  549077 buildroot.go:174] setting up certificates
	I1205 19:19:30.142480  549077 provision.go:84] configureAuth start
	I1205 19:19:30.142498  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:30.142814  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.145608  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.145944  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.145976  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.146132  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.148289  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.148544  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.148570  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.148679  549077 provision.go:143] copyHostCerts
	I1205 19:19:30.148727  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:19:30.148761  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:19:30.148778  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:19:30.148862  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:19:30.148936  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:19:30.148954  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:19:30.148960  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:19:30.148984  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:19:30.149037  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:19:30.149054  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:19:30.149058  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:19:30.149079  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:19:30.149123  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302 san=[127.0.0.1 192.168.39.185 ha-106302 localhost minikube]
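
	The SAN list in the line above (loopback, the VM IP, the host names) is what ends up in server.pem. As a rough, self-contained illustration of what generating such a server certificate involves with the Go standard library (this is not minikube's bootstrap code; it self-signs with a throwaway key purely to keep the sketch standalone):

	    package main

	    import (
	    	"crypto/ecdsa"
	    	"crypto/elliptic"
	    	"crypto/rand"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	// Throwaway key; real tooling would sign with the CA key instead.
	    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	    	if err != nil {
	    		panic(err)
	    	}
	    	tmpl := x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-106302"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		// The SANs mirror the ones reported in the log line above.
	    		DNSNames:    []string{"ha-106302", "localhost", "minikube"},
	    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.185")},
	    	}
	    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	    	if err != nil {
	    		panic(err)
	    	}
	    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
	    		panic(err)
	    	}
	    }
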
	I1205 19:19:30.203242  549077 provision.go:177] copyRemoteCerts
	I1205 19:19:30.203307  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:19:30.203333  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.206290  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.206588  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.206621  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.206770  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.206956  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.207107  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.207262  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.291637  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:19:30.291726  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:19:30.316534  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:19:30.316648  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 19:19:30.340941  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:19:30.341027  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:19:30.365151  549077 provision.go:87] duration metric: took 222.64958ms to configureAuth
	I1205 19:19:30.365205  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:19:30.365380  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:30.365454  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.367820  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.368297  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.368331  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.368517  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.368750  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.368925  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.369063  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.369263  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.369448  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.369470  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:19:30.602742  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:19:30.602781  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:19:30.602812  549077 main.go:141] libmachine: (ha-106302) Calling .GetURL
	I1205 19:19:30.604203  549077 main.go:141] libmachine: (ha-106302) DBG | Using libvirt version 6000000
	I1205 19:19:30.606408  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.606761  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.606783  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.606936  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:19:30.606953  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:19:30.606980  549077 client.go:171] duration metric: took 24.849681626s to LocalClient.Create
	I1205 19:19:30.607004  549077 start.go:167] duration metric: took 24.849757772s to libmachine.API.Create "ha-106302"
	I1205 19:19:30.607018  549077 start.go:293] postStartSetup for "ha-106302" (driver="kvm2")
	I1205 19:19:30.607027  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:19:30.607063  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.607325  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:19:30.607353  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.609392  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.609687  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.609717  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.609857  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.610024  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.610186  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.610314  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.696960  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:19:30.708057  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:19:30.708089  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:19:30.708159  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:19:30.708255  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:19:30.708293  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:19:30.708421  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:19:30.723671  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:19:30.750926  549077 start.go:296] duration metric: took 143.887881ms for postStartSetup
	I1205 19:19:30.750995  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:30.751793  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.754292  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.754719  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.754767  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.755073  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:30.755274  549077 start.go:128] duration metric: took 25.015949989s to createHost
	I1205 19:19:30.755307  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.757830  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.758211  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.758247  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.758373  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.758576  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.758728  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.758849  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.759003  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.759199  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.759225  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:19:30.869236  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426370.835143064
	
	I1205 19:19:30.869266  549077 fix.go:216] guest clock: 1733426370.835143064
	I1205 19:19:30.869276  549077 fix.go:229] Guest: 2024-12-05 19:19:30.835143064 +0000 UTC Remote: 2024-12-05 19:19:30.755292155 +0000 UTC m=+25.129028552 (delta=79.850909ms)
	I1205 19:19:30.869342  549077 fix.go:200] guest clock delta is within tolerance: 79.850909ms
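
	The clock check above parses the guest's `date +%s.%N` output and compares it with the host clock. A small sketch of that comparison (the tolerance value here is illustrative, not minikube's actual threshold):

	    package main

	    import (
	    	"fmt"
	    	"math"
	    	"strconv"
	    	"time"
	    )

	    // parseGuestClock converts the "seconds.nanoseconds" string produced by
	    // `date +%s.%N` on the guest into a time.Time (sketch; loses sub-ns precision).
	    func parseGuestClock(s string) (time.Time, error) {
	    	f, err := strconv.ParseFloat(s, 64)
	    	if err != nil {
	    		return time.Time{}, err
	    	}
	    	sec := int64(f)
	    	nsec := int64((f - float64(sec)) * 1e9)
	    	return time.Unix(sec, nsec), nil
	    }

	    func main() {
	    	guest, err := parseGuestClock("1733426370.835143064")
	    	if err != nil {
	    		panic(err)
	    	}
	    	local := time.Unix(1733426370, 755292155) // stand-in for the host clock reading
	    	delta := guest.Sub(local)
	    	const tolerance = time.Second // illustrative threshold
	    	if math.Abs(float64(delta)) <= float64(tolerance) {
	    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	    	} else {
	    		fmt.Printf("guest clock delta %v exceeds tolerance, would adjust\n", delta)
	    	}
	    }
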
	I1205 19:19:30.869354  549077 start.go:83] releasing machines lock for "ha-106302", held for 25.130102669s
	I1205 19:19:30.869396  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.869701  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.872169  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.872505  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.872550  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.872651  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873195  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873371  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873461  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:19:30.873500  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.873622  549077 ssh_runner.go:195] Run: cat /version.json
	I1205 19:19:30.873648  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.876112  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876348  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876515  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.876544  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876694  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.876787  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.876829  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876854  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.876974  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.877063  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.877155  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.877225  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.877286  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.877416  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.978260  549077 ssh_runner.go:195] Run: systemctl --version
	I1205 19:19:30.984523  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:19:31.144577  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:19:31.150862  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:19:31.150921  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:19:31.168518  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:19:31.168546  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:19:31.168607  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:19:31.184398  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:19:31.198391  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:19:31.198459  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:19:31.212374  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:19:31.227092  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:19:31.345190  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:19:31.498651  549077 docker.go:233] disabling docker service ...
	I1205 19:19:31.498756  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:19:31.514013  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:19:31.527698  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:19:31.668291  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:19:31.787293  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:19:31.802121  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:19:31.821416  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:19:31.821488  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.831922  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:19:31.832002  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.842263  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.852580  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.863167  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:19:31.873525  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.883966  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.901444  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
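
	The series of sed invocations above amounts to line-oriented rewrites of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, sysctl defaults). A rough, standalone sketch of the same idea in Go (the keys and values come from the log; the helper itself is illustrative):

	    package main

	    import (
	    	"bufio"
	    	"fmt"
	    	"strings"
	    )

	    // rewriteKey replaces any line that assigns `key` with the given value,
	    // mirroring what the `sed -i 's|^.*key = .*$|...|'` calls in the log do.
	    func rewriteKey(conf, key, value string) string {
	    	var out []string
	    	sc := bufio.NewScanner(strings.NewReader(conf))
	    	for sc.Scan() {
	    		line := sc.Text()
	    		if strings.HasPrefix(strings.TrimSpace(line), key+" =") {
	    			line = fmt.Sprintf("%s = %q", key, value)
	    		}
	    		out = append(out, line)
	    	}
	    	return strings.Join(out, "\n") + "\n"
	    }

	    func main() {
	    	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	    	conf = rewriteKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	    	conf = rewriteKey(conf, "cgroup_manager", "cgroupfs")
	    	fmt.Print(conf)
	    }
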
	I1205 19:19:31.913185  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:19:31.922739  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:19:31.922847  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:19:31.935394  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:19:31.944801  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:19:32.062619  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:19:32.155496  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:19:32.155575  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:19:32.161325  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:19:32.161401  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:19:32.165363  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:19:32.206408  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:19:32.206526  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:19:32.236278  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:19:32.267603  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:19:32.269318  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:32.272307  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:32.272654  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:32.272680  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:32.272875  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:19:32.277254  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
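The /etc/hosts update above is an idempotent append: any existing host.minikube.internal record is filtered out with grep -v, the fresh record is appended, and the file is copied back with sudo cp, so repeated starts never accumulate duplicate entries. The same pattern, parameterized, as a sketch (name and IP taken from this run):

    # Sketch: idempotently (re)pin a host record, mirroring the command in the log.
    name=host.minikube.internal
    ip=192.168.39.1
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$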
	I1205 19:19:32.290866  549077 kubeadm.go:883] updating cluster {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:19:32.290982  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:32.291025  549077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:19:32.327363  549077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 19:19:32.327433  549077 ssh_runner.go:195] Run: which lz4
	I1205 19:19:32.331533  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 19:19:32.331639  549077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 19:19:32.335872  549077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 19:19:32.335904  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 19:19:33.796243  549077 crio.go:462] duration metric: took 1.464622041s to copy over tarball
	I1205 19:19:33.796360  549077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 19:19:35.904137  549077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.107740538s)
	I1205 19:19:35.904177  549077 crio.go:469] duration metric: took 2.107873128s to extract the tarball
	I1205 19:19:35.904188  549077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 19:19:35.941468  549077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:19:35.985079  549077 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:19:35.985107  549077 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:19:35.985116  549077 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.2 crio true true} ...
	I1205 19:19:35.985222  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
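The unit override above is what ends up in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 309-byte scp a few lines below): it clears ExecStart and relaunches kubelet from the version-pinned minikube binary with the node IP and hostname override. Once those files are in place, a quick way to confirm what systemd actually sees, as a sketch:

    # Sketch: print the merged kubelet unit (base unit plus the 10-kubeadm.conf drop-in).
    systemctl cat kubelet
    # And, after the later "systemctl start kubelet", its runtime state:
    systemctl status kubelet --no-pager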
	I1205 19:19:35.985289  549077 ssh_runner.go:195] Run: crio config
	I1205 19:19:36.034780  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:36.034806  549077 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 19:19:36.034818  549077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:19:36.034841  549077 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-106302 NodeName:ha-106302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:19:36.035004  549077 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-106302"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
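The generated file above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into one multi-document YAML. As a sketch, it can be exercised without modifying /etc/kubernetes before the real init runs, using the binary and config paths that appear later in this log:

    # Sketch: dry-run kubeadm against the generated config; nothing is written to /etc/kubernetes.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run
    # If preflight complains, the same --ignore-preflight-errors list used by the real init applies.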
	
	I1205 19:19:36.035032  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:19:36.035097  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:19:36.051693  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:19:36.051834  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
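kube-vip runs as a static pod on each control-plane node: with vip_leaderelection enabled, the current leader announces the HA endpoint 192.168.39.254 over ARP on eth0, and lb_enable/lb_port add API-server load balancing on 8443. A hedged sketch, using only values from the manifest above and paths from this run, to see which node currently holds the VIP:

    # Sketch: on a control-plane node, check whether the VIP from the manifest is bound locally.
    ip -4 addr show dev eth0 | grep -w 192.168.39.254 \
        && echo "this node currently holds the kube-vip address"
    # The leader-election lease named by vip_leasename in the manifest lives in kube-system:
    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get lease plndr-cp-lock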
	I1205 19:19:36.051903  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:19:36.062174  549077 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:19:36.062270  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 19:19:36.072102  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 19:19:36.089037  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:19:36.105710  549077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 19:19:36.122352  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1205 19:19:36.139382  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:19:36.143400  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:19:36.156091  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:19:36.264660  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:19:36.281414  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.185
	I1205 19:19:36.281442  549077 certs.go:194] generating shared ca certs ...
	I1205 19:19:36.281458  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.281638  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:19:36.281689  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:19:36.281704  549077 certs.go:256] generating profile certs ...
	I1205 19:19:36.281767  549077 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:19:36.281786  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt with IP's: []
	I1205 19:19:36.500418  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt ...
	I1205 19:19:36.500457  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt: {Name:mkb14e7bfcf7e74b43ed78fd0539344fe783f416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.500681  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key ...
	I1205 19:19:36.500700  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key: {Name:mk7e0330a0f2228d88e0f9d58264fe1f08349563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.500831  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da
	I1205 19:19:36.500858  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.254]
	I1205 19:19:36.595145  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da ...
	I1205 19:19:36.595178  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da: {Name:mk6fe31beb668f4be09d7ef716f12b627681f889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.595356  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da ...
	I1205 19:19:36.595368  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da: {Name:mkb2102bd03507fee93efd6f4ad4d01650f6960d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.595451  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:19:36.595530  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:19:36.595588  549077 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:19:36.595600  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt with IP's: []
	I1205 19:19:36.750498  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt ...
	I1205 19:19:36.750528  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt: {Name:mk310719ddd3b7c13526e0d5963ab5146ba62c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.750689  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key ...
	I1205 19:19:36.750700  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key: {Name:mka21d6cd95f23029a85e314b05925420c5b8d35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.750768  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:19:36.750785  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:19:36.750796  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:19:36.750809  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:19:36.750819  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:19:36.750831  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:19:36.750841  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:19:36.750856  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:19:36.750907  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:19:36.750946  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:19:36.750968  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:19:36.750995  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:19:36.751018  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:19:36.751046  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:19:36.751085  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:19:36.751157  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:19:36.751182  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:19:36.751197  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:36.751757  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:19:36.777283  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:19:36.800796  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:19:36.824188  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:19:36.847922  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 19:19:36.871853  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:19:36.897433  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:19:36.923449  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:19:36.949838  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:19:36.975187  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:19:36.999764  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:19:37.024507  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:19:37.044052  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:19:37.052297  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:19:37.068345  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.073536  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.073603  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.080035  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:19:37.091136  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:19:37.115623  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.120621  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.120687  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.126618  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:19:37.138669  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:19:37.150853  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.155803  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.155881  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.162049  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
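The openssl/ln pairs above implement the standard OpenSSL CA directory layout: each certificate under /usr/share/ca-certificates is symlinked into /etc/ssl/certs, its subject hash is computed with openssl x509 -hash, and a <hash>.0 symlink is added so TLS clients can look the CA up by hash. A minimal sketch of the same two links for the minikube CA (paths and hash as in this run):

    # Sketch: recreate the two symlinks the log creates for the minikube CA.
    src=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$src" /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$src")        # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"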
	I1205 19:19:37.174819  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:19:37.179494  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:19:37.179570  549077 kubeadm.go:392] StartCluster: {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:19:37.179688  549077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:19:37.179745  549077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:19:37.223116  549077 cri.go:89] found id: ""
	I1205 19:19:37.223191  549077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:19:37.234706  549077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:19:37.247347  549077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:19:37.259258  549077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:19:37.259287  549077 kubeadm.go:157] found existing configuration files:
	
	I1205 19:19:37.259336  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 19:19:37.269699  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 19:19:37.269766  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 19:19:37.280566  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 19:19:37.290999  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 19:19:37.291070  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 19:19:37.302967  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 19:19:37.313065  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 19:19:37.313160  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 19:19:37.323523  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 19:19:37.333224  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 19:19:37.333286  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 19:19:37.343725  549077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 19:19:37.465425  549077 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 19:19:37.465503  549077 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 19:19:37.563680  549077 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:19:37.563837  549077 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:19:37.563944  549077 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 19:19:37.577125  549077 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:19:37.767794  549077 out.go:235]   - Generating certificates and keys ...
	I1205 19:19:37.767998  549077 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 19:19:37.768133  549077 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 19:19:37.768233  549077 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:19:37.823275  549077 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:19:38.256538  549077 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:19:38.418481  549077 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 19:19:38.506453  549077 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 19:19:38.506612  549077 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-106302 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I1205 19:19:38.599268  549077 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 19:19:38.599504  549077 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-106302 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I1205 19:19:38.721006  549077 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:19:38.801347  549077 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:19:39.020781  549077 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 19:19:39.020849  549077 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:19:39.351214  549077 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:19:39.652426  549077 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 19:19:39.852747  549077 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:19:39.949305  549077 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:19:40.093193  549077 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:19:40.093754  549077 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:19:40.099424  549077 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:19:40.101578  549077 out.go:235]   - Booting up control plane ...
	I1205 19:19:40.101681  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:19:40.101747  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:19:40.101808  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:19:40.118245  549077 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:19:40.124419  549077 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:19:40.124472  549077 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 19:19:40.264350  549077 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 19:19:40.264527  549077 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 19:19:40.767072  549077 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.104658ms
	I1205 19:19:40.767195  549077 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 19:19:46.889839  549077 kubeadm.go:310] [api-check] The API server is healthy after 6.126522028s
	I1205 19:19:46.903949  549077 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:19:46.920566  549077 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:19:46.959559  549077 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:19:46.959762  549077 kubeadm.go:310] [mark-control-plane] Marking the node ha-106302 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:19:46.972882  549077 kubeadm.go:310] [bootstrap-token] Using token: hftusq.bke4u9rqswjxk9ui
	I1205 19:19:46.974672  549077 out.go:235]   - Configuring RBAC rules ...
	I1205 19:19:46.974836  549077 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:19:46.983462  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:19:46.993184  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:19:47.001254  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:19:47.006556  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:19:47.012815  549077 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:19:47.297618  549077 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:19:47.737983  549077 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 19:19:48.297207  549077 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 19:19:48.298256  549077 kubeadm.go:310] 
	I1205 19:19:48.298332  549077 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 19:19:48.298344  549077 kubeadm.go:310] 
	I1205 19:19:48.298499  549077 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 19:19:48.298523  549077 kubeadm.go:310] 
	I1205 19:19:48.298551  549077 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 19:19:48.298654  549077 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:19:48.298730  549077 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:19:48.298740  549077 kubeadm.go:310] 
	I1205 19:19:48.298818  549077 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 19:19:48.298835  549077 kubeadm.go:310] 
	I1205 19:19:48.298894  549077 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:19:48.298903  549077 kubeadm.go:310] 
	I1205 19:19:48.298967  549077 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 19:19:48.299056  549077 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:19:48.299139  549077 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:19:48.299148  549077 kubeadm.go:310] 
	I1205 19:19:48.299267  549077 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:19:48.299368  549077 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 19:19:48.299380  549077 kubeadm.go:310] 
	I1205 19:19:48.299496  549077 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hftusq.bke4u9rqswjxk9ui \
	I1205 19:19:48.299623  549077 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 19:19:48.299658  549077 kubeadm.go:310] 	--control-plane 
	I1205 19:19:48.299667  549077 kubeadm.go:310] 
	I1205 19:19:48.299787  549077 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:19:48.299797  549077 kubeadm.go:310] 
	I1205 19:19:48.299896  549077 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hftusq.bke4u9rqswjxk9ui \
	I1205 19:19:48.300017  549077 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 19:19:48.300978  549077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:19:48.301019  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:48.301039  549077 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 19:19:48.302992  549077 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:19:48.304422  549077 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:19:48.310158  549077 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 19:19:48.310179  549077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 19:19:48.330305  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 19:19:48.708578  549077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:19:48.708692  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:48.708697  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302 minikube.k8s.io/updated_at=2024_12_05T19_19_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=true
	I1205 19:19:48.766673  549077 ops.go:34] apiserver oom_adj: -16
	I1205 19:19:48.946725  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:49.447511  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:49.947827  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:50.447219  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:50.947321  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:51.447070  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:51.946846  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:52.030950  549077 kubeadm.go:1113] duration metric: took 3.322332375s to wait for elevateKubeSystemPrivileges
	I1205 19:19:52.030984  549077 kubeadm.go:394] duration metric: took 14.851420641s to StartCluster
	I1205 19:19:52.031005  549077 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:52.031096  549077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:52.032088  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:52.032382  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:19:52.032390  549077 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:52.032418  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:19:52.032436  549077 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 19:19:52.032529  549077 addons.go:69] Setting storage-provisioner=true in profile "ha-106302"
	I1205 19:19:52.032562  549077 addons.go:234] Setting addon storage-provisioner=true in "ha-106302"
	I1205 19:19:52.032575  549077 addons.go:69] Setting default-storageclass=true in profile "ha-106302"
	I1205 19:19:52.032596  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:19:52.032603  549077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-106302"
	I1205 19:19:52.032616  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:52.032974  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.033012  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.033080  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.033128  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.048867  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I1205 19:19:52.048932  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
	I1205 19:19:52.049474  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.049598  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.050083  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.050108  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.050196  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.050217  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.050494  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.050547  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.050740  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.051108  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.051156  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.053000  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:52.053380  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:19:52.053986  549077 cert_rotation.go:140] Starting client certificate rotation controller
	I1205 19:19:52.054434  549077 addons.go:234] Setting addon default-storageclass=true in "ha-106302"
	I1205 19:19:52.054485  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:19:52.054871  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.054924  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.068403  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
	I1205 19:19:52.069056  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.069816  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.069851  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.070279  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.070500  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.071258  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I1205 19:19:52.071775  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.072386  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.072414  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.072576  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:52.072784  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.073435  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.073491  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.074239  549077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:19:52.075532  549077 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:19:52.075550  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:19:52.075581  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:52.079231  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.079693  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:52.079729  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.080048  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:52.080297  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:52.080464  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:52.080625  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:52.090582  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1205 19:19:52.091077  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.091649  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.091690  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.092023  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.092235  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.093928  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:52.094164  549077 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:19:52.094184  549077 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:19:52.094204  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:52.097425  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.097952  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:52.097988  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.098172  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:52.098357  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:52.098547  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:52.098690  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:52.240649  549077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:19:52.260476  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:19:52.326335  549077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:19:53.107266  549077 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
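
The kubectl | sed | kubectl replace pipeline a few lines above is what injects the host.minikube.internal record into CoreDNS. For readers who prefer the API view, a rough client-go equivalent of that edit is sketched below; the file, function, and helper names are illustrative rather than minikube's actual code, and it simply splices a hosts block ahead of the forward directive in the Corefile and writes the ConfigMap back.

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// injectHostRecord adds "<hostIP> host.minikube.internal" to the coredns Corefile.
func injectHostRecord(ctx context.Context, cs *kubernetes.Clientset, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	// Splice the hosts block in just before the forward directive, mirroring the sed expression in the log.
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	if err := injectHostRecord(context.Background(), kubernetes.NewForConfigOrDie(cfg), "192.168.39.1"); err != nil {
		panic(err)
	}
}
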
	I1205 19:19:53.107380  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107404  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107428  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107411  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107855  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.107863  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.107872  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.107875  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.107881  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107889  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107898  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107909  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.108388  549077 main.go:141] libmachine: (ha-106302) DBG | Closing plugin on server side
	I1205 19:19:53.108430  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.108447  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.108523  549077 main.go:141] libmachine: (ha-106302) DBG | Closing plugin on server side
	I1205 19:19:53.108536  549077 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 19:19:53.108552  549077 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 19:19:53.108666  549077 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1205 19:19:53.108672  549077 round_trippers.go:469] Request Headers:
	I1205 19:19:53.108683  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:19:53.108690  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:19:53.108977  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.109004  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.122784  549077 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1205 19:19:53.123463  549077 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1205 19:19:53.123481  549077 round_trippers.go:469] Request Headers:
	I1205 19:19:53.123489  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:19:53.123494  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:19:53.123497  549077 round_trippers.go:473]     Content-Type: application/json
	I1205 19:19:53.127870  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:19:53.128387  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.128421  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.128753  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.128782  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.130618  549077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1205 19:19:53.131922  549077 addons.go:510] duration metric: took 1.09949066s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 19:19:53.131966  549077 start.go:246] waiting for cluster config update ...
	I1205 19:19:53.131976  549077 start.go:255] writing updated cluster config ...
	I1205 19:19:53.133784  549077 out.go:201] 
	I1205 19:19:53.135291  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:53.135384  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:53.137100  549077 out.go:177] * Starting "ha-106302-m02" control-plane node in "ha-106302" cluster
	I1205 19:19:53.138489  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:53.138517  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:19:53.138635  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:19:53.138649  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:19:53.138720  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:53.138982  549077 start.go:360] acquireMachinesLock for ha-106302-m02: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:19:53.139025  549077 start.go:364] duration metric: took 23.765µs to acquireMachinesLock for "ha-106302-m02"
	I1205 19:19:53.139048  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:53.139118  549077 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1205 19:19:53.140509  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:19:53.140599  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:53.140636  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:53.156622  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I1205 19:19:53.157158  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:53.157623  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:53.157649  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:53.157947  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:53.158168  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:19:53.158323  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:19:53.158520  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:19:53.158562  549077 client.go:168] LocalClient.Create starting
	I1205 19:19:53.158607  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:19:53.158656  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:53.158704  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:53.158778  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:19:53.158809  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:53.158825  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:53.158852  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:19:53.158863  549077 main.go:141] libmachine: (ha-106302-m02) Calling .PreCreateCheck
	I1205 19:19:53.159044  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:19:53.159562  549077 main.go:141] libmachine: Creating machine...
	I1205 19:19:53.159580  549077 main.go:141] libmachine: (ha-106302-m02) Calling .Create
	I1205 19:19:53.159720  549077 main.go:141] libmachine: (ha-106302-m02) Creating KVM machine...
	I1205 19:19:53.161306  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found existing default KVM network
	I1205 19:19:53.161451  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found existing private KVM network mk-ha-106302
	I1205 19:19:53.161677  549077 main.go:141] libmachine: (ha-106302-m02) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 ...
	I1205 19:19:53.161706  549077 main.go:141] libmachine: (ha-106302-m02) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:19:53.161792  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.161686  549462 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:53.161946  549077 main.go:141] libmachine: (ha-106302-m02) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:19:53.454907  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.454778  549462 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa...
	I1205 19:19:53.629727  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.629571  549462 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/ha-106302-m02.rawdisk...
	I1205 19:19:53.629774  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Writing magic tar header
	I1205 19:19:53.629794  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Writing SSH key tar header
	I1205 19:19:53.629802  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.629693  549462 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 ...
	I1205 19:19:53.629813  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02
	I1205 19:19:53.629877  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 (perms=drwx------)
	I1205 19:19:53.629901  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:19:53.629937  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:19:53.629971  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:19:53.629982  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:53.629997  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:19:53.630005  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:19:53.630016  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:19:53.630032  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:19:53.630058  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:19:53.630069  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:19:53.630084  549077 main.go:141] libmachine: (ha-106302-m02) Creating domain...
	I1205 19:19:53.630098  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home
	I1205 19:19:53.630111  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Skipping /home - not owner
	I1205 19:19:53.630931  549077 main.go:141] libmachine: (ha-106302-m02) define libvirt domain using xml: 
	I1205 19:19:53.630951  549077 main.go:141] libmachine: (ha-106302-m02) <domain type='kvm'>
	I1205 19:19:53.630961  549077 main.go:141] libmachine: (ha-106302-m02)   <name>ha-106302-m02</name>
	I1205 19:19:53.630968  549077 main.go:141] libmachine: (ha-106302-m02)   <memory unit='MiB'>2200</memory>
	I1205 19:19:53.630977  549077 main.go:141] libmachine: (ha-106302-m02)   <vcpu>2</vcpu>
	I1205 19:19:53.630984  549077 main.go:141] libmachine: (ha-106302-m02)   <features>
	I1205 19:19:53.630994  549077 main.go:141] libmachine: (ha-106302-m02)     <acpi/>
	I1205 19:19:53.630998  549077 main.go:141] libmachine: (ha-106302-m02)     <apic/>
	I1205 19:19:53.631006  549077 main.go:141] libmachine: (ha-106302-m02)     <pae/>
	I1205 19:19:53.631010  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631018  549077 main.go:141] libmachine: (ha-106302-m02)   </features>
	I1205 19:19:53.631023  549077 main.go:141] libmachine: (ha-106302-m02)   <cpu mode='host-passthrough'>
	I1205 19:19:53.631031  549077 main.go:141] libmachine: (ha-106302-m02)   
	I1205 19:19:53.631048  549077 main.go:141] libmachine: (ha-106302-m02)   </cpu>
	I1205 19:19:53.631078  549077 main.go:141] libmachine: (ha-106302-m02)   <os>
	I1205 19:19:53.631098  549077 main.go:141] libmachine: (ha-106302-m02)     <type>hvm</type>
	I1205 19:19:53.631107  549077 main.go:141] libmachine: (ha-106302-m02)     <boot dev='cdrom'/>
	I1205 19:19:53.631116  549077 main.go:141] libmachine: (ha-106302-m02)     <boot dev='hd'/>
	I1205 19:19:53.631124  549077 main.go:141] libmachine: (ha-106302-m02)     <bootmenu enable='no'/>
	I1205 19:19:53.631134  549077 main.go:141] libmachine: (ha-106302-m02)   </os>
	I1205 19:19:53.631143  549077 main.go:141] libmachine: (ha-106302-m02)   <devices>
	I1205 19:19:53.631154  549077 main.go:141] libmachine: (ha-106302-m02)     <disk type='file' device='cdrom'>
	I1205 19:19:53.631183  549077 main.go:141] libmachine: (ha-106302-m02)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/boot2docker.iso'/>
	I1205 19:19:53.631194  549077 main.go:141] libmachine: (ha-106302-m02)       <target dev='hdc' bus='scsi'/>
	I1205 19:19:53.631203  549077 main.go:141] libmachine: (ha-106302-m02)       <readonly/>
	I1205 19:19:53.631212  549077 main.go:141] libmachine: (ha-106302-m02)     </disk>
	I1205 19:19:53.631221  549077 main.go:141] libmachine: (ha-106302-m02)     <disk type='file' device='disk'>
	I1205 19:19:53.631237  549077 main.go:141] libmachine: (ha-106302-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:19:53.631252  549077 main.go:141] libmachine: (ha-106302-m02)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/ha-106302-m02.rawdisk'/>
	I1205 19:19:53.631263  549077 main.go:141] libmachine: (ha-106302-m02)       <target dev='hda' bus='virtio'/>
	I1205 19:19:53.631274  549077 main.go:141] libmachine: (ha-106302-m02)     </disk>
	I1205 19:19:53.631284  549077 main.go:141] libmachine: (ha-106302-m02)     <interface type='network'>
	I1205 19:19:53.631293  549077 main.go:141] libmachine: (ha-106302-m02)       <source network='mk-ha-106302'/>
	I1205 19:19:53.631316  549077 main.go:141] libmachine: (ha-106302-m02)       <model type='virtio'/>
	I1205 19:19:53.631331  549077 main.go:141] libmachine: (ha-106302-m02)     </interface>
	I1205 19:19:53.631344  549077 main.go:141] libmachine: (ha-106302-m02)     <interface type='network'>
	I1205 19:19:53.631354  549077 main.go:141] libmachine: (ha-106302-m02)       <source network='default'/>
	I1205 19:19:53.631367  549077 main.go:141] libmachine: (ha-106302-m02)       <model type='virtio'/>
	I1205 19:19:53.631376  549077 main.go:141] libmachine: (ha-106302-m02)     </interface>
	I1205 19:19:53.631384  549077 main.go:141] libmachine: (ha-106302-m02)     <serial type='pty'>
	I1205 19:19:53.631393  549077 main.go:141] libmachine: (ha-106302-m02)       <target port='0'/>
	I1205 19:19:53.631401  549077 main.go:141] libmachine: (ha-106302-m02)     </serial>
	I1205 19:19:53.631415  549077 main.go:141] libmachine: (ha-106302-m02)     <console type='pty'>
	I1205 19:19:53.631426  549077 main.go:141] libmachine: (ha-106302-m02)       <target type='serial' port='0'/>
	I1205 19:19:53.631434  549077 main.go:141] libmachine: (ha-106302-m02)     </console>
	I1205 19:19:53.631446  549077 main.go:141] libmachine: (ha-106302-m02)     <rng model='virtio'>
	I1205 19:19:53.631457  549077 main.go:141] libmachine: (ha-106302-m02)       <backend model='random'>/dev/random</backend>
	I1205 19:19:53.631468  549077 main.go:141] libmachine: (ha-106302-m02)     </rng>
	I1205 19:19:53.631474  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631496  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631509  549077 main.go:141] libmachine: (ha-106302-m02)   </devices>
	I1205 19:19:53.631522  549077 main.go:141] libmachine: (ha-106302-m02) </domain>
	I1205 19:19:53.631527  549077 main.go:141] libmachine: (ha-106302-m02) 
	I1205 19:19:53.638274  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:3d:5d:13 in network default
	I1205 19:19:53.638929  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring networks are active...
	I1205 19:19:53.638948  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:53.639739  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring network default is active
	I1205 19:19:53.639999  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring network mk-ha-106302 is active
	I1205 19:19:53.640360  549077 main.go:141] libmachine: (ha-106302-m02) Getting domain xml...
	I1205 19:19:53.640970  549077 main.go:141] libmachine: (ha-106302-m02) Creating domain...
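
The <domain> XML printed line by line above is what the kvm2 driver hands to libvirt before booting the new VM. The following is a minimal define-and-start sketch using the libvirt.org/go/libvirt bindings; it is an assumption about the underlying calls (the driver wraps them with its own error handling and builds the XML in memory), with the XML read from a file here for brevity.

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the config dump above
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Ensure both networks referenced by the domain XML are active before booting.
	for _, name := range []string{"default", "mk-ha-106302"} {
		net, err := conn.LookupNetworkByName(name)
		if err != nil {
			panic(err)
		}
		if active, _ := net.IsActive(); !active {
			if err := net.Create(); err != nil {
				panic(err)
			}
		}
	}

	xml, err := os.ReadFile("ha-106302-m02.xml") // the <domain> document logged above
	if err != nil {
		panic(err)
	}
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	if err := dom.Create(); err != nil { // boots the VM
		panic(err)
	}
	fmt.Println("domain defined and started")
}
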
	I1205 19:19:54.858939  549077 main.go:141] libmachine: (ha-106302-m02) Waiting to get IP...
	I1205 19:19:54.859905  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:54.860367  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:54.860447  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:54.860358  549462 retry.go:31] will retry after 210.406566ms: waiting for machine to come up
	I1205 19:19:55.072865  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.073270  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.073303  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.073236  549462 retry.go:31] will retry after 380.564554ms: waiting for machine to come up
	I1205 19:19:55.456055  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.456633  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.456664  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.456575  549462 retry.go:31] will retry after 318.906554ms: waiting for machine to come up
	I1205 19:19:55.777216  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.777679  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.777710  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.777619  549462 retry.go:31] will retry after 557.622429ms: waiting for machine to come up
	I1205 19:19:56.337019  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:56.337517  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:56.337547  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:56.337452  549462 retry.go:31] will retry after 733.803738ms: waiting for machine to come up
	I1205 19:19:57.072993  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:57.073519  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:57.073554  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:57.073464  549462 retry.go:31] will retry after 792.053725ms: waiting for machine to come up
	I1205 19:19:57.866686  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:57.867255  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:57.867284  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:57.867204  549462 retry.go:31] will retry after 899.083916ms: waiting for machine to come up
	I1205 19:19:58.767474  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:58.767846  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:58.767879  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:58.767799  549462 retry.go:31] will retry after 894.520794ms: waiting for machine to come up
	I1205 19:19:59.663948  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:59.664483  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:59.664517  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:59.664431  549462 retry.go:31] will retry after 1.445971502s: waiting for machine to come up
	I1205 19:20:01.112081  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:01.112472  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:01.112497  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:01.112419  549462 retry.go:31] will retry after 2.114052847s: waiting for machine to come up
	I1205 19:20:03.228602  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:03.229091  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:03.229116  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:03.229037  549462 retry.go:31] will retry after 2.786335133s: waiting for machine to come up
	I1205 19:20:06.019023  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:06.019472  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:06.019494  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:06.019436  549462 retry.go:31] will retry after 3.312152878s: waiting for machine to come up
	I1205 19:20:09.332971  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:09.333454  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:09.333485  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:09.333375  549462 retry.go:31] will retry after 4.193621264s: waiting for machine to come up
	I1205 19:20:13.528190  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:13.528561  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:13.528582  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:13.528513  549462 retry.go:31] will retry after 5.505002432s: waiting for machine to come up
	I1205 19:20:19.035383  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.035839  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has current primary IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.035869  549077 main.go:141] libmachine: (ha-106302-m02) Found IP for machine: 192.168.39.22
	I1205 19:20:19.035884  549077 main.go:141] libmachine: (ha-106302-m02) Reserving static IP address...
	I1205 19:20:19.036316  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find host DHCP lease matching {name: "ha-106302-m02", mac: "52:54:00:50:91:17", ip: "192.168.39.22"} in network mk-ha-106302
	I1205 19:20:19.111128  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Getting to WaitForSSH function...
	I1205 19:20:19.111162  549077 main.go:141] libmachine: (ha-106302-m02) Reserved static IP address: 192.168.39.22
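
The run of "will retry after …: waiting for machine to come up" lines above is the driver polling libvirt's DHCP leases for the new domain's MAC (52:54:00:50:91:17 on mk-ha-106302) until one appears. A compact sketch of that loop follows; the helper name and the plain doubling backoff are illustrative, whereas minikube's retry.go uses randomized, growing delays.

package kvmwait

import (
	"fmt"
	"strings"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

// waitForIP polls the network's DHCP leases until one matches the domain's MAC.
func waitForIP(net *libvirt.Network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		leases, err := net.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
				return l.IPaddr, nil
			}
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2 // crude backoff; the real driver grows and jitters the delay
		}
	}
	return "", fmt.Errorf("timed out waiting for DHCP lease for %s", mac)
}
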
	I1205 19:20:19.111175  549077 main.go:141] libmachine: (ha-106302-m02) Waiting for SSH to be available...
	I1205 19:20:19.113732  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.114085  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302
	I1205 19:20:19.114114  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find defined IP address of network mk-ha-106302 interface with MAC address 52:54:00:50:91:17
	I1205 19:20:19.114257  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH client type: external
	I1205 19:20:19.114278  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa (-rw-------)
	I1205 19:20:19.114319  549077 main.go:141] libmachine: (ha-106302-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:20:19.114332  549077 main.go:141] libmachine: (ha-106302-m02) DBG | About to run SSH command:
	I1205 19:20:19.114349  549077 main.go:141] libmachine: (ha-106302-m02) DBG | exit 0
	I1205 19:20:19.118035  549077 main.go:141] libmachine: (ha-106302-m02) DBG | SSH cmd err, output: exit status 255: 
	I1205 19:20:19.118057  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 19:20:19.118065  549077 main.go:141] libmachine: (ha-106302-m02) DBG | command : exit 0
	I1205 19:20:19.118070  549077 main.go:141] libmachine: (ha-106302-m02) DBG | err     : exit status 255
	I1205 19:20:19.118077  549077 main.go:141] libmachine: (ha-106302-m02) DBG | output  : 
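
WaitForSSH above shells out to the system ssh binary with the logged options and retries a bare `exit 0` until it stops failing (the first attempt returned exit status 255 because sshd was not yet up). A minimal os/exec version of that probe is sketched below; the retry cadence is an assumption based on the roughly three-second gap between attempts in the log.

package sshwait

import (
	"os/exec"
	"time"
)

// probeSSH runs `exit 0` on the target over ssh; a nil error means sshd is up and the key works.
func probeSSH(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath, "-p", "22",
		"docker@" + addr, "exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

// waitForSSH retries the probe until it succeeds or the timeout expires.
func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := probeSSH(addr, keyPath)
		if err == nil || time.Now().After(deadline) {
			return err
		}
		time.Sleep(3 * time.Second)
	}
}
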
	I1205 19:20:22.120219  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Getting to WaitForSSH function...
	I1205 19:20:22.122541  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.122838  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.122871  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.122905  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH client type: external
	I1205 19:20:22.122934  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa (-rw-------)
	I1205 19:20:22.122975  549077 main.go:141] libmachine: (ha-106302-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:20:22.122988  549077 main.go:141] libmachine: (ha-106302-m02) DBG | About to run SSH command:
	I1205 19:20:22.122997  549077 main.go:141] libmachine: (ha-106302-m02) DBG | exit 0
	I1205 19:20:22.248910  549077 main.go:141] libmachine: (ha-106302-m02) DBG | SSH cmd err, output: <nil>: 
	I1205 19:20:22.249203  549077 main.go:141] libmachine: (ha-106302-m02) KVM machine creation complete!
	I1205 19:20:22.249549  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:20:22.250245  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:22.250531  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:22.250724  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:20:22.250739  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetState
	I1205 19:20:22.252145  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:20:22.252159  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:20:22.252171  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:20:22.252176  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.255218  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.255608  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.255639  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.255817  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.256017  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.256246  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.256424  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.256663  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.256916  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.256931  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:20:22.368260  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:20:22.368313  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:20:22.368324  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.371040  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.371460  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.371481  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.371672  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.371891  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.372059  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.372173  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.372389  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.372564  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.372578  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:20:22.485513  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:20:22.485607  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:20:22.485621  549077 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:20:22.485637  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.485917  549077 buildroot.go:166] provisioning hostname "ha-106302-m02"
	I1205 19:20:22.485951  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.486197  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.489137  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.489476  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.489498  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.489650  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.489844  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.489970  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.490109  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.490248  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.490464  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.490479  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302-m02 && echo "ha-106302-m02" | sudo tee /etc/hostname
	I1205 19:20:22.616293  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302-m02
	
	I1205 19:20:22.616334  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.618960  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.619345  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.619376  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.619593  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.619776  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.619933  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.620106  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.620296  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.620475  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.620492  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:20:22.738362  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:20:22.738404  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:20:22.738463  549077 buildroot.go:174] setting up certificates
	I1205 19:20:22.738483  549077 provision.go:84] configureAuth start
	I1205 19:20:22.738504  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.738844  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:22.741581  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.741992  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.742022  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.742170  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.744256  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.744573  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.744600  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.744740  549077 provision.go:143] copyHostCerts
	I1205 19:20:22.744774  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:20:22.744818  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:20:22.744828  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:20:22.744891  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:20:22.744975  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:20:22.744994  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:20:22.745000  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:20:22.745024  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:20:22.745615  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:20:22.745684  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:20:22.745691  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:20:22.745739  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:20:22.745877  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302-m02 san=[127.0.0.1 192.168.39.22 ha-106302-m02 localhost minikube]
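
provision.go:117 above generates a server certificate signed by the profile CA whose SANs cover 127.0.0.1, the VM IP, the machine hostname, localhost, and minikube. A stripped-down crypto/x509 sketch of that step follows; the package and function names are illustrative, and CA/key loading plus PEM encoding are omitted.

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate with the same kinds of SANs seen in the log.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string,
	ips []net.IP, names []string) ([]byte, *rsa.PrivateKey, error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // a sketch; real code should use a random serial
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,   // e.g. 127.0.0.1, 192.168.39.22
		DNSNames:     names, // e.g. ha-106302-m02, localhost, minikube
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
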
	I1205 19:20:22.796359  549077 provision.go:177] copyRemoteCerts
	I1205 19:20:22.796421  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:20:22.796448  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.799357  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.799732  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.799766  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.799995  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.800198  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.800385  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.800538  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:22.887828  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:20:22.887929  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:20:22.916212  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:20:22.916319  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:20:22.941232  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:20:22.941341  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:20:22.967161  549077 provision.go:87] duration metric: took 228.658819ms to configureAuth
	I1205 19:20:22.967199  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:20:22.967392  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:22.967485  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.970286  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.970715  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.970749  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.970939  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.971156  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.971320  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.971433  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.971580  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.971846  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.971863  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:20:23.207888  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:20:23.207924  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:20:23.207935  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetURL
	I1205 19:20:23.209276  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using libvirt version 6000000
	I1205 19:20:23.211506  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.211907  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.211936  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.212208  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:20:23.212224  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:20:23.212232  549077 client.go:171] duration metric: took 30.053657655s to LocalClient.Create
	I1205 19:20:23.212256  549077 start.go:167] duration metric: took 30.053742841s to libmachine.API.Create "ha-106302"
	I1205 19:20:23.212293  549077 start.go:293] postStartSetup for "ha-106302-m02" (driver="kvm2")
	I1205 19:20:23.212310  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:20:23.212333  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.212577  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:20:23.212606  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.215114  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.215516  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.215546  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.215705  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.215924  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.216106  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.216253  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.304000  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:20:23.308581  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:20:23.308614  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:20:23.308698  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:20:23.308795  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:20:23.308810  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:20:23.308927  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:20:23.319412  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:20:23.344460  549077 start.go:296] duration metric: took 132.146002ms for postStartSetup
	I1205 19:20:23.344545  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:20:23.345277  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:23.348207  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.348665  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.348693  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.348984  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:20:23.349202  549077 start.go:128] duration metric: took 30.210071126s to createHost
	I1205 19:20:23.349267  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.351860  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.352216  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.352247  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.352437  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.352631  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.352819  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.352959  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.353129  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:23.353382  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:23.353399  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:20:23.465312  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426423.446273328
	
	I1205 19:20:23.465337  549077 fix.go:216] guest clock: 1733426423.446273328
	I1205 19:20:23.465346  549077 fix.go:229] Guest: 2024-12-05 19:20:23.446273328 +0000 UTC Remote: 2024-12-05 19:20:23.349227376 +0000 UTC m=+77.722963766 (delta=97.045952ms)
	I1205 19:20:23.465364  549077 fix.go:200] guest clock delta is within tolerance: 97.045952ms
	I1205 19:20:23.465370  549077 start.go:83] releasing machines lock for "ha-106302-m02", held for 30.326335436s
	I1205 19:20:23.465398  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.465708  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:23.468308  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.468731  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.468764  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.471281  549077 out.go:177] * Found network options:
	I1205 19:20:23.472818  549077 out.go:177]   - NO_PROXY=192.168.39.185
	W1205 19:20:23.473976  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:20:23.474014  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474583  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474762  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474896  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:20:23.474942  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	W1205 19:20:23.474975  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:20:23.475049  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:20:23.475075  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.477606  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.477936  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.477969  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.477989  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.478113  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.478273  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.478379  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.478405  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.478432  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.478613  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.478614  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.478752  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.478903  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.479088  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.717492  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:20:23.724398  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:20:23.724467  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:20:23.742377  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:20:23.742416  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:20:23.742481  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:20:23.759474  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:20:23.774720  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:20:23.774808  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:20:23.790887  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:20:23.807005  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:20:23.919834  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:20:24.073552  549077 docker.go:233] disabling docker service ...
	I1205 19:20:24.073644  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:20:24.088648  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:20:24.103156  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:20:24.227966  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:20:24.343808  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:20:24.359016  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:20:24.378372  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:20:24.378434  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.390093  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:20:24.390163  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.402052  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.413868  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.425063  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:20:24.436756  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.448351  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.466246  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.477646  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:20:24.487958  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:20:24.488022  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:20:24.504864  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:20:24.516929  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:24.650055  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:20:24.749984  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:20:24.750068  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:20:24.754929  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:20:24.754993  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:20:24.758880  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:20:24.803432  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:20:24.803519  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:20:24.832773  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:20:24.866071  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:20:24.867336  549077 out.go:177]   - env NO_PROXY=192.168.39.185
	I1205 19:20:24.868566  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:24.871432  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:24.871918  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:24.871951  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:24.872171  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:20:24.876554  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:20:24.890047  549077 mustload.go:65] Loading cluster: ha-106302
	I1205 19:20:24.890241  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:24.890558  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:24.890603  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:24.905579  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I1205 19:20:24.906049  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:24.906603  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:24.906625  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:24.906945  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:24.907214  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:20:24.908815  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:20:24.909241  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:24.909290  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:24.924888  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I1205 19:20:24.925342  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:24.925844  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:24.925864  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:24.926328  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:24.926542  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:20:24.926741  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.22
	I1205 19:20:24.926754  549077 certs.go:194] generating shared ca certs ...
	I1205 19:20:24.926770  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:24.926902  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:20:24.926939  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:20:24.926948  549077 certs.go:256] generating profile certs ...
	I1205 19:20:24.927023  549077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:20:24.927047  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c
	I1205 19:20:24.927061  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.254]
	I1205 19:20:25.018998  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c ...
	I1205 19:20:25.019030  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c: {Name:mkb73e87a5bbbf4f4c79d1fb041b857c135f5f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:25.019217  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c ...
	I1205 19:20:25.019230  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c: {Name:mk2fba0e13caab29e22d03865232eceeba478b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:25.019304  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:20:25.019444  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:20:25.019581  549077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:20:25.019598  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:20:25.019611  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:20:25.019630  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:20:25.019645  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:20:25.019658  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:20:25.019670  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:20:25.019681  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:20:25.019693  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:20:25.019742  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:20:25.019769  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:20:25.019780  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:20:25.019800  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:20:25.019822  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:20:25.019843  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:20:25.019881  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:20:25.019905  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.019919  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.019931  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.019965  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:20:25.022938  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:25.023319  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:20:25.023341  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:25.023553  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:20:25.023832  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:20:25.024047  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:20:25.024204  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:20:25.100678  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 19:20:25.110731  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 19:20:25.125160  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 19:20:25.130012  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 19:20:25.140972  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 19:20:25.146148  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 19:20:25.157617  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 19:20:25.162172  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1205 19:20:25.173149  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 19:20:25.178465  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 19:20:25.189406  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 19:20:25.193722  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1205 19:20:25.206028  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:20:25.233287  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:20:25.261305  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:20:25.289482  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:20:25.316415  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 19:20:25.342226  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:20:25.368246  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:20:25.393426  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:20:25.419609  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:20:25.445786  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:20:25.469979  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:20:25.493824  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 19:20:25.510843  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 19:20:25.527645  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 19:20:25.545705  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1205 19:20:25.563452  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 19:20:25.580089  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1205 19:20:25.596848  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 19:20:25.613807  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:20:25.619697  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:20:25.630983  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.635623  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.635686  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.641677  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:20:25.653239  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:20:25.664932  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.669827  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.669897  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.675619  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:20:25.687127  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:20:25.698338  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.702836  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.702900  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.708667  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:20:25.720085  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:20:25.724316  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:20:25.724377  549077 kubeadm.go:934] updating node {m02 192.168.39.22 8443 v1.31.2 crio true true} ...
	I1205 19:20:25.724468  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:20:25.724495  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:20:25.724527  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:20:25.742381  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:20:25.742481  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1205 19:20:25.742576  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:20:25.753160  549077 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 19:20:25.753241  549077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 19:20:25.763396  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 19:20:25.763426  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:20:25.763482  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:20:25.763508  549077 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1205 19:20:25.763539  549077 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1205 19:20:25.767948  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 19:20:25.767974  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 19:20:27.082938  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:20:27.083030  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:20:27.089029  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 19:20:27.089083  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 19:20:27.157306  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:20:27.187033  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:20:27.187142  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:20:27.195317  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 19:20:27.195366  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1205 19:20:27.686796  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 19:20:27.697152  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1205 19:20:27.715018  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:20:27.734908  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:20:27.752785  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:20:27.756906  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:20:27.769582  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:27.907328  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:20:27.931860  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:20:27.932222  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:27.932282  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:27.948463  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I1205 19:20:27.949044  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:27.949565  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:27.949592  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:27.949925  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:27.950146  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:20:27.950314  549077 start.go:317] joinCluster: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:20:27.950422  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 19:20:27.950440  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:20:27.953425  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:27.953881  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:20:27.953912  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:27.954070  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:20:27.954316  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:20:27.954453  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:20:27.954606  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:20:28.113909  549077 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:20:28.113956  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kqxul8.esbt6vl0oo3pylcw --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443"
	I1205 19:20:49.921346  549077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kqxul8.esbt6vl0oo3pylcw --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443": (21.80735449s)
	I1205 19:20:49.921399  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 19:20:50.372592  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302-m02 minikube.k8s.io/updated_at=2024_12_05T19_20_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=false
	I1205 19:20:50.546557  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-106302-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 19:20:50.670851  549077 start.go:319] duration metric: took 22.720530002s to joinCluster
	I1205 19:20:50.670996  549077 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:20:50.671311  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:50.672473  549077 out.go:177] * Verifying Kubernetes components...
	I1205 19:20:50.673807  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:50.984620  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:20:51.019677  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:20:51.020052  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 19:20:51.020153  549077 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.185:8443
	I1205 19:20:51.020526  549077 node_ready.go:35] waiting up to 6m0s for node "ha-106302-m02" to be "Ready" ...
	I1205 19:20:51.020686  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:51.020701  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:51.020713  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:51.020723  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:51.041602  549077 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1205 19:20:51.521579  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:51.521608  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:51.521618  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:51.521624  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:51.528072  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:20:52.021672  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:52.021725  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:52.021737  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:52.021745  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:52.033142  549077 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1205 19:20:52.521343  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:52.521374  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:52.521385  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:52.521392  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:52.538251  549077 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1205 19:20:53.021297  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:53.021332  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:53.021341  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:53.021348  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:53.024986  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:53.025544  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:53.521241  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:53.521267  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:53.521276  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:53.521280  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:53.524346  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:54.021533  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:54.021555  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:54.021563  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:54.021566  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:54.024867  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:54.521530  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:54.521559  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:54.521573  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:54.521579  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:54.525086  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.020940  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:55.020967  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:55.020978  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:55.020982  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:55.024965  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.521541  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:55.521567  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:55.521578  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:55.521583  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:55.524843  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.525513  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:56.021561  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:56.021592  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:56.021605  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:56.021613  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:56.032511  549077 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1205 19:20:56.521545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:56.521569  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:56.521578  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:56.521582  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:56.525173  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:57.021393  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:57.021418  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:57.021428  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:57.021452  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:57.024653  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:57.521602  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:57.521630  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:57.521642  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:57.521648  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:57.524714  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:58.021076  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:58.021102  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:58.021111  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:58.021115  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:58.024741  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:58.025390  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:58.521263  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:58.521301  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:58.521311  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:58.521316  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:58.524604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:59.021545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:59.021570  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:59.021579  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:59.021585  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:59.025044  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:59.521104  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:59.521130  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:59.521139  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:59.521142  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:59.524601  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:00.021726  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:00.021752  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:00.021761  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:00.021765  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:00.025155  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:00.025976  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:00.521405  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:00.521429  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:00.521438  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:00.521443  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:00.524889  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:01.021527  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:01.021552  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:01.021564  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:01.021570  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:01.025273  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:01.521362  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:01.521386  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:01.521395  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:01.521400  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:01.525347  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.021591  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:02.021615  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:02.021624  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:02.021629  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:02.025220  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.521521  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:02.521548  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:02.521557  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:02.521562  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:02.524828  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.525818  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:03.021696  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:03.021722  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:03.021731  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:03.021735  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:03.025467  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:03.521081  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:03.521106  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:03.521115  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:03.521118  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:03.525582  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:04.021546  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:04.021570  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:04.021579  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:04.021583  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:04.025004  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:04.520903  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:04.520929  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:04.520937  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:04.520942  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:04.524427  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:05.021518  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:05.021545  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:05.021554  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:05.021557  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:05.025066  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:05.025792  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:05.520844  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:05.520870  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:05.520880  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:05.520885  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:05.524450  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:06.021705  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:06.021737  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:06.021750  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:06.021757  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:06.028871  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:21:06.520789  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:06.520815  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:06.520824  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:06.520829  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:06.524081  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:07.021065  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:07.021090  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:07.021099  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:07.021104  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:07.025141  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:07.521099  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:07.521129  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:07.521139  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:07.521142  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:07.524645  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:07.525369  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:08.021173  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:08.021197  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:08.021205  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:08.021211  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:08.024992  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:08.520960  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:08.520986  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:08.520994  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:08.521000  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:08.526502  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:21:09.021508  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:09.021532  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:09.021541  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:09.021545  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:09.024675  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:09.521594  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:09.521619  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:09.521628  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:09.521631  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:09.525284  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:09.525956  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:10.021222  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.021257  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.021266  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.021271  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.024522  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.025029  549077 node_ready.go:49] node "ha-106302-m02" has status "Ready":"True"
	I1205 19:21:10.025048  549077 node_ready.go:38] duration metric: took 19.004494335s for node "ha-106302-m02" to be "Ready" ...
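[Editorial note] The loop above issues GET /api/v1/nodes/ha-106302-m02 roughly every 500ms until the node's Ready condition turns True (about 19s in this run). Below is a minimal, hypothetical sketch of that kind of readiness poll using client-go; the function name waitNodeReady, the interval, and the kubeconfig path are illustrative assumptions, not minikube's actual implementation.

// Illustrative only: polls a node's Ready condition, loosely mirroring the loop in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports Ready=True or the timeout expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-106302-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-106302-m02" is Ready`)
}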
	I1205 19:21:10.025058  549077 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:21:10.025143  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:10.025161  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.025168  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.025172  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.029254  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:10.037343  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.037449  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-45m77
	I1205 19:21:10.037458  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.037466  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.037471  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.041083  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.041839  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.041858  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.041871  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.041877  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.045415  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.045998  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.046023  549077 pod_ready.go:82] duration metric: took 8.64868ms for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.046036  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.046126  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sjsv2
	I1205 19:21:10.046137  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.046148  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.046157  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.048885  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.049682  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.049701  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.049711  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.049719  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.052106  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.052838  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.052859  549077 pod_ready.go:82] duration metric: took 6.814644ms for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.052870  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.052943  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302
	I1205 19:21:10.052958  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.052969  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.052977  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.055429  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.056066  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.056082  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.056091  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.056098  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.058521  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.059123  549077 pod_ready.go:93] pod "etcd-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.059143  549077 pod_ready.go:82] duration metric: took 6.26496ms for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.059152  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.059214  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m02
	I1205 19:21:10.059222  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.059229  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.059234  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.061697  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.062341  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.062358  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.062365  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.062369  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.064629  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.065300  549077 pod_ready.go:93] pod "etcd-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.065321  549077 pod_ready.go:82] duration metric: took 6.163254ms for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.065335  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.221800  549077 request.go:632] Waited for 156.353212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:21:10.221879  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:21:10.221887  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.221896  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.221902  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.225800  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.421906  549077 request.go:632] Waited for 195.38917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.421986  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.421994  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.422009  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.422020  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.425349  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.426055  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.426080  549077 pod_ready.go:82] duration metric: took 360.734464ms for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.426094  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.622166  549077 request.go:632] Waited for 195.985328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:21:10.622258  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:21:10.622264  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.622274  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.622278  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.626000  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.822214  549077 request.go:632] Waited for 195.406875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.822287  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.822292  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.822300  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.822313  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.825573  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.826254  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.826276  549077 pod_ready.go:82] duration metric: took 400.173601ms for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
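[Editorial note] The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's built-in rate limiter, which caps request throughput per client (5 QPS with a burst of 10 unless overridden), not from server-side priority and fairness. A hedged sketch of raising those limits on a rest.Config follows; the values are arbitrary examples, not what the test harness uses.

// Illustrative only: client-go throttles requests client-side using QPS/Burst on rest.Config.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are conservative; a burst of polling GETs like the ones in the log above
	// can exceed them, producing the "client-side throttling" waits.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("listed %d pods without tripping the rate limiter\n", len(pods.Items))
}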
	I1205 19:21:10.826290  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.021260  549077 request.go:632] Waited for 194.873219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:21:11.021346  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:21:11.021355  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.021363  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.021370  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.024811  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:11.221934  549077 request.go:632] Waited for 196.368194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:11.222013  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:11.222048  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.222064  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.222069  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.226121  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:11.226777  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:11.226804  549077 pod_ready.go:82] duration metric: took 400.496709ms for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.226817  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.421793  549077 request.go:632] Waited for 194.889039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:21:11.421939  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:21:11.421953  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.421962  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.421966  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.425791  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:11.621786  549077 request.go:632] Waited for 195.325808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:11.621884  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:11.621897  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.621912  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.621921  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.626156  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:11.626616  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:11.626639  549077 pod_ready.go:82] duration metric: took 399.812324ms for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.626651  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.821729  549077 request.go:632] Waited for 194.997004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:21:11.821817  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:21:11.821822  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.821831  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.821838  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.825718  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.021841  549077 request.go:632] Waited for 195.410535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:12.021958  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:12.021969  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.021977  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.021984  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.025441  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.025999  549077 pod_ready.go:93] pod "kube-proxy-n57lf" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.026021  549077 pod_ready.go:82] duration metric: took 399.361827ms for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.026047  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.222118  549077 request.go:632] Waited for 195.969624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:21:12.222187  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:21:12.222192  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.222200  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.222204  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.225785  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.422070  549077 request.go:632] Waited for 195.377811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.422132  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.422137  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.422145  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.422149  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.426002  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.426709  549077 pod_ready.go:93] pod "kube-proxy-zw6nj" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.426735  549077 pod_ready.go:82] duration metric: took 400.678816ms for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.426748  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.621608  549077 request.go:632] Waited for 194.758143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:21:12.621678  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:21:12.621683  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.621691  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.621699  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.625056  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.822084  549077 request.go:632] Waited for 196.278548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.822154  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.822166  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.822175  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.822178  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.826187  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.827028  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.827048  549077 pod_ready.go:82] duration metric: took 400.290627ms for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.827061  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:13.021645  549077 request.go:632] Waited for 194.500049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:21:13.021737  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:21:13.021746  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.021787  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.021795  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.025431  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:13.221555  549077 request.go:632] Waited for 195.53176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:13.221632  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:13.221641  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.221652  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.221657  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.226002  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:13.226628  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:13.226651  549077 pod_ready.go:82] duration metric: took 399.582286ms for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:13.226663  549077 pod_ready.go:39] duration metric: took 3.201594435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
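[Editorial note] After the node turns Ready, the log waits on each system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) reaching the Ready condition. A hypothetical sketch of one such per-label check with client-go is below; the label selectors mirror the ones listed in the log, while the helper name and structure are made up for illustration.

// Illustrative only: checks that every pod matching a label selector reports Ready=True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allPodsReady reports whether every pod matching selector in ns has Ready=True.
func allPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same label selectors the log waits on, checked one at a time.
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"} {
		ok, err := allPodsReady(context.Background(), cs, "kube-system", sel)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%-35s ready=%v\n", sel, ok)
	}
}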
	I1205 19:21:13.226683  549077 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:21:13.226740  549077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:21:13.244668  549077 api_server.go:72] duration metric: took 22.573625009s to wait for apiserver process to appear ...
	I1205 19:21:13.244706  549077 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:21:13.244737  549077 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I1205 19:21:13.252149  549077 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I1205 19:21:13.252242  549077 round_trippers.go:463] GET https://192.168.39.185:8443/version
	I1205 19:21:13.252252  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.252260  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.252283  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.253152  549077 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 19:21:13.253251  549077 api_server.go:141] control plane version: v1.31.2
	I1205 19:21:13.253269  549077 api_server.go:131] duration metric: took 8.556554ms to wait for apiserver health ...
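[Editorial note] Once the pods are Ready, the log probes /healthz (expecting "ok") and then /version on the control-plane endpoint, recording "control plane version: v1.31.2". A minimal sketch of both requests through client-go's discovery and REST clients; again this is illustrative, not the harness's code.

// Illustrative only: queries /healthz and /version on the apiserver, as the log does above.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz via the discovery REST client; a healthy apiserver answers "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version; compare the "control plane version" line in the log.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}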
	I1205 19:21:13.253277  549077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:21:13.421707  549077 request.go:632] Waited for 168.323563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.421778  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.421784  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.421803  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.421808  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.428060  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:21:13.433027  549077 system_pods.go:59] 17 kube-system pods found
	I1205 19:21:13.433063  549077 system_pods.go:61] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:21:13.433069  549077 system_pods.go:61] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:21:13.433073  549077 system_pods.go:61] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:21:13.433076  549077 system_pods.go:61] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:21:13.433079  549077 system_pods.go:61] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:21:13.433083  549077 system_pods.go:61] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:21:13.433087  549077 system_pods.go:61] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:21:13.433090  549077 system_pods.go:61] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:21:13.433094  549077 system_pods.go:61] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:21:13.433097  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:21:13.433101  549077 system_pods.go:61] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:21:13.433104  549077 system_pods.go:61] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:21:13.433107  549077 system_pods.go:61] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:21:13.433110  549077 system_pods.go:61] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:21:13.433114  549077 system_pods.go:61] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:21:13.433119  549077 system_pods.go:61] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:21:13.433125  549077 system_pods.go:61] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:21:13.433131  549077 system_pods.go:74] duration metric: took 179.848181ms to wait for pod list to return data ...
	I1205 19:21:13.433140  549077 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:21:13.621481  549077 request.go:632] Waited for 188.228658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:21:13.621548  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:21:13.621554  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.621562  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.621566  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.625432  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:13.625697  549077 default_sa.go:45] found service account: "default"
	I1205 19:21:13.625716  549077 default_sa.go:55] duration metric: took 192.568863ms for default service account to be created ...
	I1205 19:21:13.625725  549077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:21:13.821886  549077 request.go:632] Waited for 196.082261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.821977  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.821988  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.821997  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.822001  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.828461  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:21:13.834834  549077 system_pods.go:86] 17 kube-system pods found
	I1205 19:21:13.834869  549077 system_pods.go:89] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:21:13.834877  549077 system_pods.go:89] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:21:13.834882  549077 system_pods.go:89] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:21:13.834886  549077 system_pods.go:89] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:21:13.834890  549077 system_pods.go:89] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:21:13.834894  549077 system_pods.go:89] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:21:13.834898  549077 system_pods.go:89] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:21:13.834901  549077 system_pods.go:89] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:21:13.834905  549077 system_pods.go:89] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:21:13.834909  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:21:13.834912  549077 system_pods.go:89] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:21:13.834915  549077 system_pods.go:89] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:21:13.834919  549077 system_pods.go:89] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:21:13.834924  549077 system_pods.go:89] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:21:13.834928  549077 system_pods.go:89] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:21:13.834935  549077 system_pods.go:89] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:21:13.834939  549077 system_pods.go:89] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:21:13.834946  549077 system_pods.go:126] duration metric: took 209.215629ms to wait for k8s-apps to be running ...
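[Editorial note] The "waiting for k8s-apps to be running" step lists every pod in kube-system (17 here) and confirms each is in the Running phase. A hedged sketch of that kind of check, under the same assumptions as the earlier snippets:

// Illustrative only: verifies every kube-system pod is in phase Running.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	notRunning := 0
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			notRunning++
			fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
		}
	}
	fmt.Printf("%d kube-system pods found, %d not Running\n", len(pods.Items), notRunning)
}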
	I1205 19:21:13.834957  549077 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:21:13.835009  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:21:13.850235  549077 system_svc.go:56] duration metric: took 15.264777ms WaitForService to wait for kubelet
	I1205 19:21:13.850283  549077 kubeadm.go:582] duration metric: took 23.179247512s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:21:13.850305  549077 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:21:14.021757  549077 request.go:632] Waited for 171.347316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes
	I1205 19:21:14.021833  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes
	I1205 19:21:14.021840  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:14.021850  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:14.021860  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:14.026541  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:14.027820  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:21:14.027846  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:21:14.027863  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:21:14.027868  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:21:14.027874  549077 node_conditions.go:105] duration metric: took 177.564002ms to run NodePressure ...
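[Editorial note] The NodePressure step above reads each node's reported capacity (ephemeral storage 17734596Ki and 2 CPUs per node in this run). A brief illustrative sketch of reading those fields from the node objects with client-go; the kubeconfig path is an assumption.

// Illustrative only: prints the ephemeral-storage and CPU capacity of every node,
// the same fields the "verifying NodePressure condition" step reports above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}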
	I1205 19:21:14.027887  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:21:14.027919  549077 start.go:255] writing updated cluster config ...
	I1205 19:21:14.029921  549077 out.go:201] 
	I1205 19:21:14.031474  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:14.031571  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:14.033173  549077 out.go:177] * Starting "ha-106302-m03" control-plane node in "ha-106302" cluster
	I1205 19:21:14.034362  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:21:14.034386  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:21:14.034498  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:21:14.034514  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:21:14.034605  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:14.034796  549077 start.go:360] acquireMachinesLock for ha-106302-m03: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:21:14.034842  549077 start.go:364] duration metric: took 26.337µs to acquireMachinesLock for "ha-106302-m03"
	I1205 19:21:14.034860  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:21:14.034960  549077 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1205 19:21:14.036589  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:21:14.036698  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:14.036753  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:14.052449  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1205 19:21:14.052905  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:14.053431  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:14.053458  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:14.053758  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:14.053945  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:14.054107  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:14.054258  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:21:14.054297  549077 client.go:168] LocalClient.Create starting
	I1205 19:21:14.054348  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:21:14.054391  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:21:14.054413  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:21:14.054484  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:21:14.054515  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:21:14.054536  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:21:14.054563  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:21:14.054575  549077 main.go:141] libmachine: (ha-106302-m03) Calling .PreCreateCheck
	I1205 19:21:14.054725  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:14.055103  549077 main.go:141] libmachine: Creating machine...
	I1205 19:21:14.055117  549077 main.go:141] libmachine: (ha-106302-m03) Calling .Create
	I1205 19:21:14.055267  549077 main.go:141] libmachine: (ha-106302-m03) Creating KVM machine...
	I1205 19:21:14.056572  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found existing default KVM network
	I1205 19:21:14.056653  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found existing private KVM network mk-ha-106302
	I1205 19:21:14.056780  549077 main.go:141] libmachine: (ha-106302-m03) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 ...
	I1205 19:21:14.056804  549077 main.go:141] libmachine: (ha-106302-m03) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:21:14.056850  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.056773  549869 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:21:14.056935  549077 main.go:141] libmachine: (ha-106302-m03) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:21:14.349600  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.349456  549869 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa...
	I1205 19:21:14.429525  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.429393  549869 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/ha-106302-m03.rawdisk...
	I1205 19:21:14.429558  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Writing magic tar header
	I1205 19:21:14.429573  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Writing SSH key tar header
	I1205 19:21:14.429586  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.429511  549869 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 ...
	I1205 19:21:14.429599  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03
	I1205 19:21:14.429612  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 (perms=drwx------)
	I1205 19:21:14.429633  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:21:14.429648  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:21:14.429664  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:21:14.429734  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:21:14.429769  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:21:14.429779  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:21:14.429798  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:21:14.429808  549077 main.go:141] libmachine: (ha-106302-m03) Creating domain...
	I1205 19:21:14.429823  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:21:14.429833  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:21:14.429861  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:21:14.429878  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home
	I1205 19:21:14.429910  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Skipping /home - not owner
	I1205 19:21:14.430728  549077 main.go:141] libmachine: (ha-106302-m03) define libvirt domain using xml: 
	I1205 19:21:14.430737  549077 main.go:141] libmachine: (ha-106302-m03) <domain type='kvm'>
	I1205 19:21:14.430743  549077 main.go:141] libmachine: (ha-106302-m03)   <name>ha-106302-m03</name>
	I1205 19:21:14.430748  549077 main.go:141] libmachine: (ha-106302-m03)   <memory unit='MiB'>2200</memory>
	I1205 19:21:14.430753  549077 main.go:141] libmachine: (ha-106302-m03)   <vcpu>2</vcpu>
	I1205 19:21:14.430758  549077 main.go:141] libmachine: (ha-106302-m03)   <features>
	I1205 19:21:14.430762  549077 main.go:141] libmachine: (ha-106302-m03)     <acpi/>
	I1205 19:21:14.430769  549077 main.go:141] libmachine: (ha-106302-m03)     <apic/>
	I1205 19:21:14.430774  549077 main.go:141] libmachine: (ha-106302-m03)     <pae/>
	I1205 19:21:14.430778  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.430783  549077 main.go:141] libmachine: (ha-106302-m03)   </features>
	I1205 19:21:14.430790  549077 main.go:141] libmachine: (ha-106302-m03)   <cpu mode='host-passthrough'>
	I1205 19:21:14.430795  549077 main.go:141] libmachine: (ha-106302-m03)   
	I1205 19:21:14.430801  549077 main.go:141] libmachine: (ha-106302-m03)   </cpu>
	I1205 19:21:14.430806  549077 main.go:141] libmachine: (ha-106302-m03)   <os>
	I1205 19:21:14.430811  549077 main.go:141] libmachine: (ha-106302-m03)     <type>hvm</type>
	I1205 19:21:14.430816  549077 main.go:141] libmachine: (ha-106302-m03)     <boot dev='cdrom'/>
	I1205 19:21:14.430823  549077 main.go:141] libmachine: (ha-106302-m03)     <boot dev='hd'/>
	I1205 19:21:14.430849  549077 main.go:141] libmachine: (ha-106302-m03)     <bootmenu enable='no'/>
	I1205 19:21:14.430873  549077 main.go:141] libmachine: (ha-106302-m03)   </os>
	I1205 19:21:14.430884  549077 main.go:141] libmachine: (ha-106302-m03)   <devices>
	I1205 19:21:14.430900  549077 main.go:141] libmachine: (ha-106302-m03)     <disk type='file' device='cdrom'>
	I1205 19:21:14.430917  549077 main.go:141] libmachine: (ha-106302-m03)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/boot2docker.iso'/>
	I1205 19:21:14.430928  549077 main.go:141] libmachine: (ha-106302-m03)       <target dev='hdc' bus='scsi'/>
	I1205 19:21:14.430936  549077 main.go:141] libmachine: (ha-106302-m03)       <readonly/>
	I1205 19:21:14.430944  549077 main.go:141] libmachine: (ha-106302-m03)     </disk>
	I1205 19:21:14.430951  549077 main.go:141] libmachine: (ha-106302-m03)     <disk type='file' device='disk'>
	I1205 19:21:14.430963  549077 main.go:141] libmachine: (ha-106302-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:21:14.431003  549077 main.go:141] libmachine: (ha-106302-m03)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/ha-106302-m03.rawdisk'/>
	I1205 19:21:14.431029  549077 main.go:141] libmachine: (ha-106302-m03)       <target dev='hda' bus='virtio'/>
	I1205 19:21:14.431041  549077 main.go:141] libmachine: (ha-106302-m03)     </disk>
	I1205 19:21:14.431052  549077 main.go:141] libmachine: (ha-106302-m03)     <interface type='network'>
	I1205 19:21:14.431065  549077 main.go:141] libmachine: (ha-106302-m03)       <source network='mk-ha-106302'/>
	I1205 19:21:14.431075  549077 main.go:141] libmachine: (ha-106302-m03)       <model type='virtio'/>
	I1205 19:21:14.431084  549077 main.go:141] libmachine: (ha-106302-m03)     </interface>
	I1205 19:21:14.431096  549077 main.go:141] libmachine: (ha-106302-m03)     <interface type='network'>
	I1205 19:21:14.431107  549077 main.go:141] libmachine: (ha-106302-m03)       <source network='default'/>
	I1205 19:21:14.431122  549077 main.go:141] libmachine: (ha-106302-m03)       <model type='virtio'/>
	I1205 19:21:14.431134  549077 main.go:141] libmachine: (ha-106302-m03)     </interface>
	I1205 19:21:14.431143  549077 main.go:141] libmachine: (ha-106302-m03)     <serial type='pty'>
	I1205 19:21:14.431151  549077 main.go:141] libmachine: (ha-106302-m03)       <target port='0'/>
	I1205 19:21:14.431161  549077 main.go:141] libmachine: (ha-106302-m03)     </serial>
	I1205 19:21:14.431168  549077 main.go:141] libmachine: (ha-106302-m03)     <console type='pty'>
	I1205 19:21:14.431178  549077 main.go:141] libmachine: (ha-106302-m03)       <target type='serial' port='0'/>
	I1205 19:21:14.431186  549077 main.go:141] libmachine: (ha-106302-m03)     </console>
	I1205 19:21:14.431201  549077 main.go:141] libmachine: (ha-106302-m03)     <rng model='virtio'>
	I1205 19:21:14.431213  549077 main.go:141] libmachine: (ha-106302-m03)       <backend model='random'>/dev/random</backend>
	I1205 19:21:14.431223  549077 main.go:141] libmachine: (ha-106302-m03)     </rng>
	I1205 19:21:14.431230  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.431248  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.431260  549077 main.go:141] libmachine: (ha-106302-m03)   </devices>
	I1205 19:21:14.431266  549077 main.go:141] libmachine: (ha-106302-m03) </domain>
	I1205 19:21:14.431276  549077 main.go:141] libmachine: (ha-106302-m03) 
	I1205 19:21:14.438494  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:19:ce:fd in network default
	I1205 19:21:14.439230  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:14.439249  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring networks are active...
	I1205 19:21:14.440093  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring network default is active
	I1205 19:21:14.440381  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring network mk-ha-106302 is active
	I1205 19:21:14.440705  549077 main.go:141] libmachine: (ha-106302-m03) Getting domain xml...
	I1205 19:21:14.441404  549077 main.go:141] libmachine: (ha-106302-m03) Creating domain...
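The block above is the generated <domain> XML that the kvm2 driver hands to libvirt before starting the guest. As a rough sketch of that define-then-start flow (not minikube's actual code: the trimmed XML, the qemu:///system URI, and the error handling are assumptions), using the Go libvirt bindings:

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	// Connect to the same system hypervisor the kvm2 driver talks to.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Heavily trimmed version of the <domain> XML logged above.
	domainXML := `<domain type='kvm'>
  <name>ha-106302-m03</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
</domain>`

	// "define libvirt domain using xml"
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	// "Creating domain..." actually starts the defined domain.
	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain defined and started")
}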
	I1205 19:21:15.693271  549077 main.go:141] libmachine: (ha-106302-m03) Waiting to get IP...
	I1205 19:21:15.694143  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:15.694577  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:15.694598  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:15.694548  549869 retry.go:31] will retry after 242.776885ms: waiting for machine to come up
	I1205 19:21:15.939062  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:15.939524  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:15.939551  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:15.939479  549869 retry.go:31] will retry after 378.968491ms: waiting for machine to come up
	I1205 19:21:16.320454  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:16.320979  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:16.321027  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:16.320939  549869 retry.go:31] will retry after 344.418245ms: waiting for machine to come up
	I1205 19:21:16.667478  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:16.667854  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:16.667886  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:16.667793  549869 retry.go:31] will retry after 423.913988ms: waiting for machine to come up
	I1205 19:21:17.093467  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:17.093883  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:17.093914  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:17.093826  549869 retry.go:31] will retry after 515.714654ms: waiting for machine to come up
	I1205 19:21:17.611140  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:17.611460  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:17.611485  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:17.611417  549869 retry.go:31] will retry after 696.033751ms: waiting for machine to come up
	I1205 19:21:18.308904  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:18.309411  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:18.309441  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:18.309369  549869 retry.go:31] will retry after 785.032938ms: waiting for machine to come up
	I1205 19:21:19.095780  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:19.096341  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:19.096368  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:19.096298  549869 retry.go:31] will retry after 896.435978ms: waiting for machine to come up
	I1205 19:21:19.994107  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:19.994555  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:19.994578  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:19.994515  549869 retry.go:31] will retry after 1.855664433s: waiting for machine to come up
	I1205 19:21:21.852199  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:21.852746  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:21.852782  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:21.852681  549869 retry.go:31] will retry after 1.846119751s: waiting for machine to come up
	I1205 19:21:23.701581  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:23.702157  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:23.702188  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:23.702108  549869 retry.go:31] will retry after 2.613135019s: waiting for machine to come up
	I1205 19:21:26.317749  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:26.318296  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:26.318317  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:26.318258  549869 retry.go:31] will retry after 3.299144229s: waiting for machine to come up
	I1205 19:21:29.618947  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:29.619445  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:29.619480  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:29.619393  549869 retry.go:31] will retry after 3.447245355s: waiting for machine to come up
	I1205 19:21:33.071166  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:33.071564  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:33.071595  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:33.071509  549869 retry.go:31] will retry after 3.459206484s: waiting for machine to come up
	I1205 19:21:36.533492  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.533999  549077 main.go:141] libmachine: (ha-106302-m03) Found IP for machine: 192.168.39.151
	I1205 19:21:36.534029  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has current primary IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.534063  549077 main.go:141] libmachine: (ha-106302-m03) Reserving static IP address...
	I1205 19:21:36.534590  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find host DHCP lease matching {name: "ha-106302-m03", mac: "52:54:00:e6:65:e2", ip: "192.168.39.151"} in network mk-ha-106302
	I1205 19:21:36.616736  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Getting to WaitForSSH function...
	I1205 19:21:36.616827  549077 main.go:141] libmachine: (ha-106302-m03) Reserved static IP address: 192.168.39.151
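The "will retry after …: waiting for machine to come up" loop above keeps asking libvirt for a DHCP lease that matches the guest's MAC until one appears. A minimal sketch of that polling, assuming the same Go libvirt bindings and a simple doubling backoff (minikube's retry.go adds jitter; the function name, delays, and timeout are illustrative):

package main

import (
	"fmt"
	"strings"
	"time"

	"libvirt.org/go/libvirt"
)

// waitForIP polls the libvirt network's DHCP leases until one matches the domain's MAC.
func waitForIP(conn *libvirt.Connect, network, mac string, timeout time.Duration) (string, error) {
	nw, err := conn.LookupNetworkByName(network)
	if err != nil {
		return "", err
	}
	defer nw.Free()

	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		leases, err := nw.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
				return l.IPaddr, nil // e.g. 192.168.39.151 for 52:54:00:e6:65:e2
			}
		}
		time.Sleep(delay)
		if delay < 4*time.Second { // roughly the growth seen in the log (0.24s -> 3.4s)
			delay *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s in %s after %s", mac, network, timeout)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	ip, err := waitForIP(conn, "mk-ha-106302", "52:54:00:e6:65:e2", 2*time.Minute)
	fmt.Println(ip, err)
}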
	I1205 19:21:36.616852  549077 main.go:141] libmachine: (ha-106302-m03) Waiting for SSH to be available...
	I1205 19:21:36.619362  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.620041  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.620071  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.620207  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using SSH client type: external
	I1205 19:21:36.620243  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa (-rw-------)
	I1205 19:21:36.620289  549077 main.go:141] libmachine: (ha-106302-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:21:36.620307  549077 main.go:141] libmachine: (ha-106302-m03) DBG | About to run SSH command:
	I1205 19:21:36.620323  549077 main.go:141] libmachine: (ha-106302-m03) DBG | exit 0
	I1205 19:21:36.748331  549077 main.go:141] libmachine: (ha-106302-m03) DBG | SSH cmd err, output: <nil>: 
	I1205 19:21:36.748638  549077 main.go:141] libmachine: (ha-106302-m03) KVM machine creation complete!
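The "Waiting for SSH to be available..." step above shells out to the external ssh client with the options shown in the log and treats a successful `exit 0` as proof the guest is reachable. A stripped-down sketch under that assumption (attempt count and sleep interval are made up; the key path is the one from the log; a subset of the logged ssh options is used):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` over ssh until it succeeds or we give up.
func sshReady(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	for attempt := 0; attempt < 30; attempt++ {
		if exec.Command("ssh", args...).Run() == nil {
			return nil // sshd answered and the trivial command exited 0
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", ip)
}

func main() {
	err := sshReady("192.168.39.151",
		"/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa")
	fmt.Println(err)
}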
	I1205 19:21:36.748951  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:36.749696  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:36.749899  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:36.750158  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:21:36.750177  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetState
	I1205 19:21:36.751459  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:21:36.751496  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:21:36.751505  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:21:36.751516  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.753721  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.754147  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.754180  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.754321  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.754488  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.754635  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.754782  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.754931  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.755238  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.755253  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:21:36.859924  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:21:36.859961  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:21:36.859974  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.864316  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.864691  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.864716  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.864886  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.865081  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.865227  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.865363  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.865505  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.865742  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.865757  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:21:36.969493  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:21:36.969588  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:21:36.969602  549077 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:21:36.969613  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:36.969955  549077 buildroot.go:166] provisioning hostname "ha-106302-m03"
	I1205 19:21:36.969984  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:36.970178  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.972856  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.973248  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.973275  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.973447  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.973641  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.973807  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.973971  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.974182  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.974409  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.974424  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302-m03 && echo "ha-106302-m03" | sudo tee /etc/hostname
	I1205 19:21:37.091631  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302-m03
	
	I1205 19:21:37.091670  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.095049  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.095508  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.095538  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.095711  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.095892  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.096106  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.096340  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.096575  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.096743  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.096759  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:21:37.210648  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:21:37.210686  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:21:37.210703  549077 buildroot.go:174] setting up certificates
	I1205 19:21:37.210719  549077 provision.go:84] configureAuth start
	I1205 19:21:37.210728  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:37.211084  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:37.214307  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.214777  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.214811  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.214993  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.217609  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.218026  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.218059  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.218357  549077 provision.go:143] copyHostCerts
	I1205 19:21:37.218397  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:21:37.218443  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:21:37.218457  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:21:37.218538  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:21:37.218640  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:21:37.218667  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:21:37.218672  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:21:37.218707  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:21:37.218773  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:21:37.218800  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:21:37.218810  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:21:37.218844  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:21:37.218931  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302-m03 san=[127.0.0.1 192.168.39.151 ha-106302-m03 localhost minikube]
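configureAuth above issues a per-machine server certificate signed by the minikube CA, with the node's IP and hostnames as SANs (san=[127.0.0.1 192.168.39.151 ha-106302-m03 localhost minikube]). A bare-bones sketch of issuing such a certificate with Go's crypto/x509; the key size, validity window, and function name are assumptions, and PEM encoding plus file writes are omitted:

package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the given CA, carrying the IP
// and DNS SANs seen in the provisioning log. Returns DER bytes plus the new key.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-106302-m03"}}, // org= in the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.151")},
		DNSNames:     []string{"ha-106302-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

The same mechanism reappears below when the apiserver certificate is regenerated for this third control-plane node with a longer SAN list that also covers the HA virtual IP 192.168.39.254.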
	I1205 19:21:37.343754  549077 provision.go:177] copyRemoteCerts
	I1205 19:21:37.343819  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:21:37.343847  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.346846  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.347219  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.347248  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.347438  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.347639  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.347948  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.348134  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:37.432798  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:21:37.432880  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:21:37.459881  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:21:37.459950  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:21:37.486599  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:21:37.486685  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:21:37.511864  549077 provision.go:87] duration metric: took 301.129005ms to configureAuth
	I1205 19:21:37.511899  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:21:37.512151  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:37.512247  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.515413  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.515827  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.515873  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.516082  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.516362  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.516553  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.516696  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.516848  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.517021  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.517041  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:21:37.766182  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:21:37.766214  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:21:37.766223  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetURL
	I1205 19:21:37.767491  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using libvirt version 6000000
	I1205 19:21:37.770234  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.770645  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.770683  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.770820  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:21:37.770836  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:21:37.770844  549077 client.go:171] duration metric: took 23.716534789s to LocalClient.Create
	I1205 19:21:37.770869  549077 start.go:167] duration metric: took 23.716613038s to libmachine.API.Create "ha-106302"
	I1205 19:21:37.770879  549077 start.go:293] postStartSetup for "ha-106302-m03" (driver="kvm2")
	I1205 19:21:37.770890  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:21:37.770909  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:37.771260  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:21:37.771293  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.773751  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.774322  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.774351  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.774623  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.774898  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.775132  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.775318  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:37.864963  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:21:37.869224  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:21:37.869250  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:21:37.869346  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:21:37.869450  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:21:37.869464  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:21:37.869572  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:21:37.878920  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:21:37.904695  549077 start.go:296] duration metric: took 133.797994ms for postStartSetup
	I1205 19:21:37.904759  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:37.905447  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:37.908301  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.908672  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.908702  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.908956  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:37.909156  549077 start.go:128] duration metric: took 23.874183503s to createHost
	I1205 19:21:37.909187  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.911450  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.911786  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.911820  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.911891  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.912073  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.912217  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.912383  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.912551  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.912721  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.912731  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:21:38.013720  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426497.965708253
	
	I1205 19:21:38.013754  549077 fix.go:216] guest clock: 1733426497.965708253
	I1205 19:21:38.013766  549077 fix.go:229] Guest: 2024-12-05 19:21:37.965708253 +0000 UTC Remote: 2024-12-05 19:21:37.909171964 +0000 UTC m=+152.282908362 (delta=56.536289ms)
	I1205 19:21:38.013790  549077 fix.go:200] guest clock delta is within tolerance: 56.536289ms
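The guest-clock check above parses the output of `date +%s.%N` on the guest and compares it to the host-side timestamp, accepting skew below a tolerance. A tiny sketch of that comparison, using the values from the log (the one-second tolerance here is an assumption, not minikube's actual threshold):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute skew between guest and host clocks and
// whether it falls within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1733426497, 965708253) // parsed from `date +%s.%N` on the guest
	host := time.Date(2024, 12, 5, 19, 21, 37, 909171964, time.UTC)
	d, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Println(d, ok) // ~56.536289ms true, matching the delta in the log
}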
	I1205 19:21:38.013799  549077 start.go:83] releasing machines lock for "ha-106302-m03", held for 23.978946471s
	I1205 19:21:38.013827  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.014134  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:38.016789  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.017218  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.017243  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.019529  549077 out.go:177] * Found network options:
	I1205 19:21:38.020846  549077 out.go:177]   - NO_PROXY=192.168.39.185,192.168.39.22
	W1205 19:21:38.022010  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:21:38.022031  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:21:38.022044  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022565  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022780  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022889  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:21:38.022930  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	W1205 19:21:38.022997  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:21:38.023035  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:21:38.023141  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:21:38.023159  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:38.025672  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.025960  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026079  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.026109  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026225  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:38.026344  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.026368  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026432  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:38.026548  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:38.026555  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:38.026676  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:38.026727  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:38.026820  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:38.026963  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:38.262374  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:21:38.269119  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:21:38.269192  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:21:38.288736  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:21:38.288773  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:21:38.288918  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:21:38.308145  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:21:38.324419  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:21:38.324486  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:21:38.340495  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:21:38.356196  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:21:38.499051  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:21:38.664170  549077 docker.go:233] disabling docker service ...
	I1205 19:21:38.664261  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:21:38.679720  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:21:38.693887  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:21:38.835246  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:21:38.967777  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:21:38.984739  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:21:39.005139  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:21:39.005219  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.018668  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:21:39.018748  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.030582  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.042783  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.055956  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:21:39.068121  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.079421  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.099262  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
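The sequence of sed edits above converges on a CRI-O drop-in (/etc/crio/crio.conf.d/02-crio.conf) roughly like the excerpt below: pause image pinned, cgroupfs as the cgroup manager, conmon in the pod cgroup, and unprivileged ports opened via default_sysctls. The section headers are where CRI-O normally keeps these keys; the exact file on the VM may differ.

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]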
	I1205 19:21:39.112188  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:21:39.123835  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:21:39.123897  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:21:39.142980  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:21:39.158784  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:21:39.282396  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:21:39.381886  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:21:39.381979  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:21:39.387103  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:21:39.387165  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:21:39.391338  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:21:39.433516  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:21:39.433618  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:21:39.463442  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:21:39.493740  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:21:39.495019  549077 out.go:177]   - env NO_PROXY=192.168.39.185
	I1205 19:21:39.496240  549077 out.go:177]   - env NO_PROXY=192.168.39.185,192.168.39.22
	I1205 19:21:39.497508  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:39.500359  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:39.500726  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:39.500755  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:39.500911  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:21:39.505557  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:21:39.519317  549077 mustload.go:65] Loading cluster: ha-106302
	I1205 19:21:39.519614  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:39.519880  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:39.519923  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:39.535653  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I1205 19:21:39.536186  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:39.536801  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:39.536826  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:39.537227  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:39.537444  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:21:39.538986  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:21:39.539332  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:39.539371  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:39.555429  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I1205 19:21:39.555999  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:39.556560  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:39.556589  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:39.556932  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:39.557156  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:21:39.557335  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.151
	I1205 19:21:39.557356  549077 certs.go:194] generating shared ca certs ...
	I1205 19:21:39.557390  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.557557  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:21:39.557617  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:21:39.557630  549077 certs.go:256] generating profile certs ...
	I1205 19:21:39.557734  549077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:21:39.557771  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85
	I1205 19:21:39.557795  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.151 192.168.39.254]
	I1205 19:21:39.646088  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 ...
	I1205 19:21:39.646122  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85: {Name:mkca6986931a87aa8d4bcffb8b1ac6412a83db65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.646289  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85 ...
	I1205 19:21:39.646301  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85: {Name:mke7f657c575646b15413aa5e5525c127a73d588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.646374  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:21:39.646516  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:21:39.646682  549077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:21:39.646703  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:21:39.646737  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:21:39.646758  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:21:39.646775  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:21:39.646792  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:21:39.646808  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:21:39.646827  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:21:39.660323  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:21:39.660454  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:21:39.660507  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:21:39.660523  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:21:39.660561  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:21:39.660595  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:21:39.660628  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:21:39.660684  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:21:39.660725  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:21:39.660748  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:21:39.660768  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:39.660816  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:21:39.664340  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:39.664849  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:21:39.664879  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:39.665165  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:21:39.665411  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:21:39.665607  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:21:39.665765  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:21:39.748651  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 19:21:39.754014  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 19:21:39.766062  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 19:21:39.771674  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 19:21:39.784618  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 19:21:39.789041  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 19:21:39.802785  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 19:21:39.808595  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1205 19:21:39.822597  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 19:21:39.827169  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 19:21:39.839924  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 19:21:39.844630  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1205 19:21:39.865166  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:21:39.890669  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:21:39.914805  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:21:39.938866  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:21:39.964041  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1205 19:21:39.989973  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:21:40.017414  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:21:40.042496  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:21:40.067448  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:21:40.092444  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:21:40.118324  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:21:40.144679  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 19:21:40.162124  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 19:21:40.178895  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 19:21:40.196614  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1205 19:21:40.216743  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 19:21:40.236796  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1205 19:21:40.255368  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 19:21:40.272767  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:21:40.279013  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:21:40.291865  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.297901  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.297969  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.305022  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:21:40.317671  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:21:40.330059  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.335215  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.335291  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.341648  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:21:40.353809  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:21:40.366241  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.371103  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.371178  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.377410  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
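	The three openssl/ln blocks above install the uploaded certificates into the node's system trust store: `openssl x509 -hash -noout -in <cert>` prints the subject hash (b5213941 for minikubeCA.pem here), and the certificate is then linked as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can resolve it. A minimal local sketch of that pattern in Go follows; it is an illustration, not minikube's certs.go, and it runs without the sudo/SSH indirection seen in the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert mirrors the log's pattern: ask openssl for the subject hash of a
	// CA certificate and expose it in the certs directory as <hash>.0.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}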
	I1205 19:21:40.389484  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:21:40.394089  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:21:40.394159  549077 kubeadm.go:934] updating node {m03 192.168.39.151 8443 v1.31.2 crio true true} ...
	I1205 19:21:40.394281  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:21:40.394312  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:21:40.394383  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:21:40.412017  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:21:40.412099  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
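	The manifest above is the static Pod that minikube writes to /etc/kubernetes/manifests so kube-vip can announce the control-plane VIP 192.168.39.254 on port 8443 via ARP, with leader election and control-plane load-balancing enabled. As a hedged illustration of how such a manifest is parameterized (a sketch, not minikube's actual kube-vip.go template; the kubeVIPParams struct and envTmpl are hypothetical), only the env section is rendered here:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeVIPParams holds the per-cluster values; field names are illustrative.
	type kubeVIPParams struct {
		VIP       string // control-plane virtual IP announced via ARP
		Port      string // API server port fronted by the VIP
		Interface string // host interface kube-vip binds to
	}

	// envTmpl renders only the env: section of the kube-vip container, matching
	// the shape of the manifest in the log above.
	const envTmpl = `    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "{{ .Port }}"
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: cp_enable
	      value: "true"
	    - name: address
	      value: {{ .VIP }}
	    - name: lb_enable
	      value: "true"
	`

	func main() {
		p := kubeVIPParams{VIP: "192.168.39.254", Port: "8443", Interface: "eth0"}
		t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			os.Exit(1)
		}
	}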
	I1205 19:21:40.412152  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:21:40.422903  549077 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 19:21:40.422982  549077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 19:21:40.433537  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1205 19:21:40.433551  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1205 19:21:40.433572  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:21:40.433606  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:21:40.433603  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 19:21:40.433634  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:21:40.433638  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:21:40.433701  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:21:40.452070  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 19:21:40.452102  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:21:40.452118  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 19:21:40.452167  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 19:21:40.452196  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:21:40.452198  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 19:21:40.481457  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 19:21:40.481500  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1205 19:21:41.411979  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 19:21:41.422976  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 19:21:41.442199  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:21:41.460832  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:21:41.479070  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:21:41.483375  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:21:41.497066  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:21:41.622952  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:21:41.643215  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:21:41.643585  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:41.643643  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:41.660142  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39403
	I1205 19:21:41.660811  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:41.661472  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:41.661507  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:41.661908  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:41.662156  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:21:41.663022  549077 start.go:317] joinCluster: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:21:41.663207  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 19:21:41.663239  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:21:41.666973  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:41.667413  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:21:41.667445  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:41.667629  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:21:41.667805  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:21:41.667958  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:21:41.668092  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:21:41.845827  549077 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:21:41.845894  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bitrl5.l9o7pcy69k2x0m8f --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m03 --control-plane --apiserver-advertise-address=192.168.39.151 --apiserver-bind-port=8443"
	I1205 19:22:05.091694  549077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bitrl5.l9o7pcy69k2x0m8f --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m03 --control-plane --apiserver-advertise-address=192.168.39.151 --apiserver-bind-port=8443": (23.245742289s)
	I1205 19:22:05.091745  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 19:22:05.651069  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302-m03 minikube.k8s.io/updated_at=2024_12_05T19_22_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=false
	I1205 19:22:05.805746  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-106302-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 19:22:05.942387  549077 start.go:319] duration metric: took 24.279360239s to joinCluster
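	The joinCluster step logged above follows the usual kubeadm HA flow: `kubeadm token create --print-join-command --ttl=0` runs on the existing control plane, the new machine runs the returned `kubeadm join` against control-plane.minikube.internal:8443 with `--control-plane` and its own advertise address, and the node is then labeled and its control-plane taint removed. A hedged Go sketch of that two-command flow is below; exec.Command stands in for minikube's SSH runner and the helper name is hypothetical.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// joinControlPlane sketches the two steps from the log: mint a join command on
	// the primary control plane, then append the control-plane flags the new node
	// needs. In the real flow the second command runs on the joining node over SSH.
	func joinControlPlane(advertiseIP, nodeName string) error {
		out, err := exec.Command("/bin/bash", "-c",
			"sudo kubeadm token create --print-join-command --ttl=0").Output()
		if err != nil {
			return fmt.Errorf("creating join command: %w", err)
		}
		join := strings.TrimSpace(string(out)) +
			" --control-plane" +
			" --apiserver-advertise-address=" + advertiseIP +
			" --node-name=" + nodeName
		return exec.Command("/bin/bash", "-c", "sudo "+join).Run()
	}

	func main() {
		if err := joinControlPlane("192.168.39.151", "ha-106302-m03"); err != nil {
			fmt.Println("join failed:", err)
		}
	}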
	I1205 19:22:05.942527  549077 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:22:05.942909  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:05.943936  549077 out.go:177] * Verifying Kubernetes components...
	I1205 19:22:05.945223  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:22:06.284991  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:22:06.343812  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:22:06.344263  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 19:22:06.344398  549077 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.185:8443
	I1205 19:22:06.344797  549077 node_ready.go:35] waiting up to 6m0s for node "ha-106302-m03" to be "Ready" ...
	I1205 19:22:06.344937  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:06.344951  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:06.344962  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:06.344969  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:06.358416  549077 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1205 19:22:06.845609  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:06.845637  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:06.845650  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:06.845657  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:06.850140  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:07.345201  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:07.345229  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:07.345238  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:07.345242  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:07.349137  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:07.845591  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:07.845615  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:07.845624  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:07.845628  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:07.849417  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:08.345109  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:08.345139  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:08.345151  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:08.345155  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:08.349617  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:08.350266  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:08.845598  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:08.845626  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:08.845638  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:08.845643  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:08.849144  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:09.345621  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:09.345646  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:09.345656  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:09.345660  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:09.349983  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:09.845757  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:09.845782  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:09.845790  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:09.845794  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:09.849681  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:10.345604  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:10.345635  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:10.345648  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:10.345654  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:10.349727  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:10.350478  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:10.845342  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:10.845367  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:10.845376  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:10.845381  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:10.848990  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:11.346073  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:11.346097  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:11.346105  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:11.346109  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:11.350613  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:11.845378  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:11.845411  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:11.845426  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:11.845434  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:11.849253  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:12.345303  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:12.345337  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:12.345349  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:12.345358  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:12.352355  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:12.353182  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:12.845552  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:12.845581  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:12.845591  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:12.845595  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:12.849732  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:13.345587  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:13.345613  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:13.345623  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:13.345629  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:13.349259  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:13.845165  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:13.845197  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:13.845209  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:13.845214  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:13.849815  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:14.345423  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:14.345458  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:14.345471  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:14.345480  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:14.353042  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:22:14.353960  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:14.845215  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:14.845239  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:14.845248  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:14.845252  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:14.848681  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:15.345651  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:15.345681  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:15.345699  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:15.345706  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:15.349604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:15.845599  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:15.845627  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:15.845637  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:15.845641  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:15.849736  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:16.345974  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:16.346003  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:16.346012  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:16.346017  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:16.350399  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:16.845026  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:16.845057  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:16.845067  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:16.845071  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:16.848713  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:16.849459  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:17.345612  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:17.345660  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:17.345688  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:17.345700  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:17.349461  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:17.845355  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:17.845379  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:17.845388  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:17.845392  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:17.851232  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:22:18.346074  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:18.346098  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:18.346107  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:18.346112  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:18.350327  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:18.845241  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:18.845266  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:18.845273  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:18.845277  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:18.848579  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:18.849652  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:19.345480  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:19.345506  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:19.345515  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:19.345519  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:19.349757  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:19.845572  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:19.845597  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:19.845606  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:19.845621  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:19.849116  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:20.345089  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:20.345113  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:20.345121  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:20.345126  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:20.348890  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:20.846039  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:20.846062  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:20.846070  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:20.846075  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:20.850247  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:20.850972  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:21.345329  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:21.345370  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:21.345381  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:21.345387  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:21.349225  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:21.845571  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:21.845604  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:21.845616  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:21.845622  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:21.849183  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:22.345428  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:22.345453  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:22.345461  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:22.345466  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:22.349371  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:22.845510  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:22.845534  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:22.845543  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:22.845549  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:22.849220  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:23.345442  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:23.345470  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:23.345479  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:23.345484  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:23.349347  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:23.350300  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:23.845549  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:23.845574  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:23.845582  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:23.845587  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:23.849893  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:24.345261  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:24.345292  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:24.345302  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:24.345306  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:24.349136  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:24.845545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:24.845574  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:24.845583  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:24.845586  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:24.849619  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:25.345655  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.345687  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.345745  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.345781  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.349427  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.350218  549077 node_ready.go:49] node "ha-106302-m03" has status "Ready":"True"
	I1205 19:22:25.350237  549077 node_ready.go:38] duration metric: took 19.005417749s for node "ha-106302-m03" to be "Ready" ...
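	The repeated GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03 requests above are minikube polling the node's Ready condition roughly every 500ms until the kubelet on m03 reports Ready (about 19s here). A hedged client-go sketch of an equivalent wait loop follows (plain polling with time.Sleep rather than minikube's logged round-trippers; the kubeconfig path is the one loaded earlier in the log).

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the Node object until its Ready condition is True or the
	// timeout expires, mirroring what node_ready.go does via raw round-trippers.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("node %q not Ready within %s", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20052-530897/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "ha-106302-m03", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}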
	I1205 19:22:25.350247  549077 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:22:25.350324  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:25.350335  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.350342  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.350347  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.358969  549077 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 19:22:25.365676  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.365768  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-45m77
	I1205 19:22:25.365777  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.365785  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.365790  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.369626  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.370252  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.370268  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.370276  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.370280  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.373604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.374401  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.374417  549077 pod_ready.go:82] duration metric: took 8.712508ms for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.374426  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.374491  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sjsv2
	I1205 19:22:25.374498  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.374505  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.374510  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.377314  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.378099  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.378115  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.378125  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.378130  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.380745  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.381330  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.381354  549077 pod_ready.go:82] duration metric: took 6.920357ms for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.381366  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.381430  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302
	I1205 19:22:25.381437  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.381445  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.381452  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.384565  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.385119  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.385140  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.385150  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.385156  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.387832  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.388313  549077 pod_ready.go:93] pod "etcd-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.388334  549077 pod_ready.go:82] duration metric: took 6.95931ms for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.388344  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.388405  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m02
	I1205 19:22:25.388413  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.388420  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.388426  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.390958  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.391627  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:25.391646  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.391657  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.391664  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.394336  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.394843  549077 pod_ready.go:93] pod "etcd-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.394860  549077 pod_ready.go:82] duration metric: took 6.510348ms for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.394870  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.546322  549077 request.go:632] Waited for 151.362843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m03
	I1205 19:22:25.546441  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m03
	I1205 19:22:25.546457  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.546468  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.546478  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.551505  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:22:25.746379  549077 request.go:632] Waited for 194.045637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.746447  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.746452  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.746460  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.746465  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.749940  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.750364  549077 pod_ready.go:93] pod "etcd-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.750384  549077 pod_ready.go:82] duration metric: took 355.50711ms for pod "etcd-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
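	The "Waited ... due to client-side throttling, not priority and fairness" lines are client-go's default rate limiter at work: the rest.Config logged earlier shows QPS:0 and Burst:0, so the client falls back to 5 QPS with a burst of 10, and the back-to-back pod/node GETs have to queue briefly. A hedged sketch of raising those limits on a rest.Config is below; it is an illustration only, not something this test does.

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the same kubeconfig the test uses (path shown in the log).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20052-530897/kubeconfig")
		if err != nil {
			panic(err)
		}
		// With QPS/Burst left at zero, client-go applies its 5 QPS / burst 10
		// defaults, which is what produces the throttling messages above. Raising
		// them removes the client-side wait; server-side priority and fairness
		// still applies.
		cfg.QPS = 50
		cfg.Burst = 100
		_ = kubernetes.NewForConfigOrDie(cfg)
	}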
	I1205 19:22:25.750410  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.945946  549077 request.go:632] Waited for 195.44547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:22:25.946012  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:22:25.946017  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.946026  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.946031  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.949896  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.146187  549077 request.go:632] Waited for 195.303913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:26.146261  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:26.146266  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.146281  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.146284  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.150155  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.150850  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.150872  549077 pod_ready.go:82] duration metric: took 400.452175ms for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.150884  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.346018  549077 request.go:632] Waited for 195.032626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:22:26.346106  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:22:26.346114  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.346126  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.346134  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.350215  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:26.546617  549077 request.go:632] Waited for 195.375501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:26.546704  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:26.546710  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.546718  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.546722  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.550695  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.551267  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.551288  549077 pod_ready.go:82] duration metric: took 400.395912ms for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.551301  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.746009  549077 request.go:632] Waited for 194.599498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m03
	I1205 19:22:26.746081  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m03
	I1205 19:22:26.746088  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.746096  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.746102  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.750448  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:26.945801  549077 request.go:632] Waited for 194.318273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:26.945876  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:26.945882  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.945893  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.945901  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.949211  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.949781  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.949807  549077 pod_ready.go:82] duration metric: took 398.493465ms for pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.949821  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.145762  549077 request.go:632] Waited for 195.843082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:22:27.145841  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:22:27.145847  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.145856  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.145863  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.150825  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:27.346689  549077 request.go:632] Waited for 195.243035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:27.346772  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:27.346785  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.346804  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.346815  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.350485  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:27.351090  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:27.351111  549077 pod_ready.go:82] duration metric: took 401.282274ms for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.351122  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.546113  549077 request.go:632] Waited for 194.908111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:22:27.546216  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:22:27.546228  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.546241  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.546255  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.550360  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:27.746526  549077 request.go:632] Waited for 195.360331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:27.746617  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:27.746626  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.746635  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.746640  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.753462  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:27.754708  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:27.754735  549077 pod_ready.go:82] duration metric: took 403.601936ms for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.754750  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.945674  549077 request.go:632] Waited for 190.826423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m03
	I1205 19:22:27.945746  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m03
	I1205 19:22:27.945752  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.945760  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.945764  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.949668  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.146444  549077 request.go:632] Waited for 195.387763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.146510  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.146515  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.146523  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.146535  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.150750  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.151357  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.151381  549077 pod_ready.go:82] duration metric: took 396.622007ms for pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.151393  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.345948  549077 request.go:632] Waited for 194.471828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:22:28.346043  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:22:28.346051  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.346059  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.346064  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.350114  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.546260  549077 request.go:632] Waited for 195.407825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:28.546369  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:28.546382  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.546394  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.546413  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.551000  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.551628  549077 pod_ready.go:93] pod "kube-proxy-n57lf" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.551654  549077 pod_ready.go:82] duration metric: took 400.254319ms for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.551666  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pghdx" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.746587  549077 request.go:632] Waited for 194.82213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pghdx
	I1205 19:22:28.746705  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pghdx
	I1205 19:22:28.746718  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.746727  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.746737  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.750453  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.946581  549077 request.go:632] Waited for 195.373436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.946682  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.946693  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.946704  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.946714  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.949892  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.950341  549077 pod_ready.go:93] pod "kube-proxy-pghdx" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.950360  549077 pod_ready.go:82] duration metric: took 398.68655ms for pod "kube-proxy-pghdx" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.950370  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.145964  549077 request.go:632] Waited for 195.515335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:22:29.146035  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:22:29.146042  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.146052  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.146058  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.149161  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:29.346356  549077 request.go:632] Waited for 196.408917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.346467  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.346475  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.346505  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.346577  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.350334  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:29.351251  549077 pod_ready.go:93] pod "kube-proxy-zw6nj" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:29.351290  549077 pod_ready.go:82] duration metric: took 400.913186ms for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.351307  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.545602  549077 request.go:632] Waited for 194.210598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:22:29.545674  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:22:29.545682  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.545694  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.545705  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.549980  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:29.746034  549077 request.go:632] Waited for 195.473431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.746121  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.746128  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.746140  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.746148  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.750509  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:29.751460  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:29.751481  549077 pod_ready.go:82] duration metric: took 400.162109ms for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.751493  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.946019  549077 request.go:632] Waited for 194.44438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:22:29.946119  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:22:29.946131  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.946140  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.946148  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.949224  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.146466  549077 request.go:632] Waited for 196.38785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:30.146542  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:30.146550  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.146562  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.146575  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.150163  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.150654  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:30.150677  549077 pod_ready.go:82] duration metric: took 399.174639ms for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.150688  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.346682  549077 request.go:632] Waited for 195.915039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m03
	I1205 19:22:30.346759  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m03
	I1205 19:22:30.346764  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.346773  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.346788  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.350596  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.545763  549077 request.go:632] Waited for 194.297931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:30.545847  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:30.545854  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.545865  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.545873  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.549623  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.550473  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:30.550494  549077 pod_ready.go:82] duration metric: took 399.800176ms for pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.550505  549077 pod_ready.go:39] duration metric: took 5.200248716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
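The block above is the pod-readiness gate: for each system-critical pod the driver fetches the pod object, then its node, and moves on once the pod reports a Ready condition of "True". Below is a minimal client-go sketch of that check, not minikube's own pod_ready.go logic; the kubeconfig path is a placeholder and the pod name is simply reused from the log.

    // Poll one kube-system pod until its Ready condition is True (sketch only).
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget the log mentions
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-106302-m03", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }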
	I1205 19:22:30.550539  549077 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:22:30.550598  549077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:22:30.565872  549077 api_server.go:72] duration metric: took 24.623303746s to wait for apiserver process to appear ...
	I1205 19:22:30.565908  549077 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:22:30.565931  549077 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I1205 19:22:30.570332  549077 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I1205 19:22:30.570415  549077 round_trippers.go:463] GET https://192.168.39.185:8443/version
	I1205 19:22:30.570426  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.570440  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.570444  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.571545  549077 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:22:30.571615  549077 api_server.go:141] control plane version: v1.31.2
	I1205 19:22:30.571635  549077 api_server.go:131] duration metric: took 5.719204ms to wait for apiserver health ...
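Here the flow switches from per-pod checks to an apiserver health probe: a GET on /healthz (a healthy apiserver answers 200 with the literal body "ok") followed by GET /version, which is where "control plane version: v1.31.2" comes from. A rough equivalent using client-go's discovery client, under the same placeholder-kubeconfig assumption as above:

    // Probe /healthz and /version on the apiserver (sketch only).
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Raw GET against /healthz with the kubeconfig's credentials.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body) // expect "ok"
        // GET /version reports the control-plane build.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }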
	I1205 19:22:30.571664  549077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:22:30.746133  549077 request.go:632] Waited for 174.37713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:30.746217  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:30.746231  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.746244  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.746251  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.753131  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:30.760159  549077 system_pods.go:59] 24 kube-system pods found
	I1205 19:22:30.760194  549077 system_pods.go:61] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:22:30.760202  549077 system_pods.go:61] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:22:30.760208  549077 system_pods.go:61] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:22:30.760214  549077 system_pods.go:61] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:22:30.760219  549077 system_pods.go:61] "etcd-ha-106302-m03" [08e9ef91-8e16-4ff1-a2df-8275e72a5697] Running
	I1205 19:22:30.760224  549077 system_pods.go:61] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:22:30.760228  549077 system_pods.go:61] "kindnet-wdsv9" [83d82f5d-42c3-47be-af20-41b82c16b114] Running
	I1205 19:22:30.760233  549077 system_pods.go:61] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:22:30.760238  549077 system_pods.go:61] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:22:30.760243  549077 system_pods.go:61] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:22:30.760249  549077 system_pods.go:61] "kube-apiserver-ha-106302-m03" [398242aa-f015-47ca-9132-23412c52878d] Running
	I1205 19:22:30.760254  549077 system_pods.go:61] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:22:30.760259  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:22:30.760288  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m03" [8af17291-c1b7-417f-a2dd-5a00ca58b07e] Running
	I1205 19:22:30.760294  549077 system_pods.go:61] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:22:30.760300  549077 system_pods.go:61] "kube-proxy-pghdx" [915060a3-353c-4a2c-a9d6-494206776446] Running
	I1205 19:22:30.760306  549077 system_pods.go:61] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:22:30.760312  549077 system_pods.go:61] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:22:30.760321  549077 system_pods.go:61] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:22:30.760327  549077 system_pods.go:61] "kube-scheduler-ha-106302-m03" [1b601e0c-59c7-4248-b29c-44d19934f590] Running
	I1205 19:22:30.760333  549077 system_pods.go:61] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:22:30.760339  549077 system_pods.go:61] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:22:30.760347  549077 system_pods.go:61] "kube-vip-ha-106302-m03" [6e511769-148e-43eb-a4bb-6dd72dfcd11d] Running
	I1205 19:22:30.760352  549077 system_pods.go:61] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:22:30.760361  549077 system_pods.go:74] duration metric: took 188.685514ms to wait for pod list to return data ...
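The 24-pod inventory above is a single list of the kube-system namespace; the driver only needs each pod it cares about to report phase Running. A compact sketch of that list-and-count step, again with a placeholder kubeconfig path:

    // List kube-system pods and count how many are Running (sketch only).
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        running := 0
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodRunning {
                running++
            }
        }
        fmt.Printf("%d kube-system pods found, %d Running\n", len(pods.Items), running)
    }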
	I1205 19:22:30.760375  549077 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:22:30.946070  549077 request.go:632] Waited for 185.595824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:22:30.946137  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:22:30.946142  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.946151  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.946159  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.950732  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:30.950901  549077 default_sa.go:45] found service account: "default"
	I1205 19:22:30.950919  549077 default_sa.go:55] duration metric: took 190.53748ms for default service account to be created ...
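The next gate waits for the "default" service account, since pod creation in a namespace generally fails until the controller manager has created that ServiceAccount. The check is just a list of serviceaccounts in the default namespace; a sketch under the same placeholder-kubeconfig assumption:

    // Check that the "default" ServiceAccount exists (sketch only).
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sas, err := cs.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                fmt.Println("found service account:", sa.Name)
                return
            }
        }
        fmt.Println("default service account not created yet")
    }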
	I1205 19:22:30.950929  549077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:22:31.146374  549077 request.go:632] Waited for 195.332956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:31.146437  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:31.146443  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:31.146451  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:31.146456  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:31.153763  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:22:31.160825  549077 system_pods.go:86] 24 kube-system pods found
	I1205 19:22:31.160858  549077 system_pods.go:89] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:22:31.160865  549077 system_pods.go:89] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:22:31.160869  549077 system_pods.go:89] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:22:31.160874  549077 system_pods.go:89] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:22:31.160878  549077 system_pods.go:89] "etcd-ha-106302-m03" [08e9ef91-8e16-4ff1-a2df-8275e72a5697] Running
	I1205 19:22:31.160882  549077 system_pods.go:89] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:22:31.160888  549077 system_pods.go:89] "kindnet-wdsv9" [83d82f5d-42c3-47be-af20-41b82c16b114] Running
	I1205 19:22:31.160893  549077 system_pods.go:89] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:22:31.160900  549077 system_pods.go:89] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:22:31.160908  549077 system_pods.go:89] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:22:31.160914  549077 system_pods.go:89] "kube-apiserver-ha-106302-m03" [398242aa-f015-47ca-9132-23412c52878d] Running
	I1205 19:22:31.160925  549077 system_pods.go:89] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:22:31.160931  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:22:31.160937  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m03" [8af17291-c1b7-417f-a2dd-5a00ca58b07e] Running
	I1205 19:22:31.160946  549077 system_pods.go:89] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:22:31.160950  549077 system_pods.go:89] "kube-proxy-pghdx" [915060a3-353c-4a2c-a9d6-494206776446] Running
	I1205 19:22:31.160956  549077 system_pods.go:89] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:22:31.160960  549077 system_pods.go:89] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:22:31.160970  549077 system_pods.go:89] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:22:31.160976  549077 system_pods.go:89] "kube-scheduler-ha-106302-m03" [1b601e0c-59c7-4248-b29c-44d19934f590] Running
	I1205 19:22:31.160979  549077 system_pods.go:89] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:22:31.160985  549077 system_pods.go:89] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:22:31.160989  549077 system_pods.go:89] "kube-vip-ha-106302-m03" [6e511769-148e-43eb-a4bb-6dd72dfcd11d] Running
	I1205 19:22:31.160992  549077 system_pods.go:89] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:22:31.161001  549077 system_pods.go:126] duration metric: took 210.065272ms to wait for k8s-apps to be running ...
	I1205 19:22:31.161014  549077 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:22:31.161075  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:22:31.179416  549077 system_svc.go:56] duration metric: took 18.393613ms WaitForService to wait for kubelet
	I1205 19:22:31.179447  549077 kubeadm.go:582] duration metric: took 25.236889217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
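The kubelet gate is not an API call at all: the driver shells onto the node and relies on the exit code of systemctl is-active --quiet, which is 0 only when the unit is active. The sketch below shows the same idea with os/exec run locally; the SSH transport of minikube's ssh_runner and the exact "service kubelet" argument form from the log are omitted.

    // Check whether the kubelet systemd unit is active via its exit code (sketch only).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // On a minikube node this runs over SSH; here we assume local systemd access.
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            // A non-zero exit status surfaces as *exec.ExitError.
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }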
	I1205 19:22:31.179468  549077 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:22:31.345848  549077 request.go:632] Waited for 166.292279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes
	I1205 19:22:31.345915  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes
	I1205 19:22:31.345920  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:31.345937  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:31.345942  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:31.350337  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:31.351373  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351397  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351414  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351420  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351426  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351430  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351436  549077 node_conditions.go:105] duration metric: took 171.962205ms to run NodePressure ...
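The NodePressure step above reduces to reading each node's reported capacity; the log prints the same two fields (17734596Ki of ephemeral storage, 2 CPUs) for all three nodes. A sketch that lists the nodes and prints those capacities, with the usual placeholder kubeconfig:

    // Print CPU and ephemeral-storage capacity for every node (sketch only).
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }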
	I1205 19:22:31.351452  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:22:31.351479  549077 start.go:255] writing updated cluster config ...
	I1205 19:22:31.351794  549077 ssh_runner.go:195] Run: rm -f paused
	I1205 19:22:31.407206  549077 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 19:22:31.410298  549077 out.go:177] * Done! kubectl is now configured to use "ha-106302" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.359781987Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-p8z47,Uid:16e14c1a-196d-42a8-b245-1a488cb9667f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733426554244352667,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T19:22:32.428915145Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sjsv2,Uid:b686cbc5-1b4f-44ea-89cb-70063b687718,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1733426408802580132,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T19:20:08.193477771Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-45m77,Uid:88196078-5292-43dc-84b2-dc53af435e5c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733426408524385184,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88196078-5292-43dc-84b2-dc53af435e5c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-12-05T19:20:08.202810948Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:88d6e224-b304-4f84-a162-9803400c9acf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733426408521702738,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-05T19:20:08.200228348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&PodSandboxMetadata{Name:kindnet-xr9mh,Uid:2044800c-f517-439e-810b-71a114cb044e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733426392717197693,Labels:map[string]string{app: kindnet,controller-revision-hash: 65ddb8b87b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-12-05T19:19:52.107703830Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&PodSandboxMetadata{Name:kube-proxy-zw6nj,Uid:d35e1426-9151-4eb3-95fd-c2b36c126b51,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733426392437032465,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T19:19:52.116031311Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-106302,Uid:b7aeab01bb9a2149eedec308e9c9b613,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1733426381216045510,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b7aeab01bb9a2149eedec308e9c9b613,kubernetes.io/config.seen: 2024-12-05T19:19:40.732056443Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-106302,Uid:94f9241c16c5e3fb852233a6fe3994b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733426381213053754,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{kubernetes.io/config.hash: 94f9
241c16c5e3fb852233a6fe3994b7,kubernetes.io/config.seen: 2024-12-05T19:19:40.732057185Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-106302,Uid:cd6cd909fedaf70356c0cea88a63589f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733426381184666410,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.185:8443,kubernetes.io/config.hash: cd6cd909fedaf70356c0cea88a63589f,kubernetes.io/config.seen: 2024-12-05T19:19:40.732053950Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Met
adata:&PodSandboxMetadata{Name:etcd-ha-106302,Uid:44e395bdaa0336ddb64b019178e9d783,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733426381182882639,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.185:2379,kubernetes.io/config.hash: 44e395bdaa0336ddb64b019178e9d783,kubernetes.io/config.seen: 2024-12-05T19:19:40.732049838Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-106302,Uid:112c68d960b3bd38f8fac52ec570505b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733426381180732950,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 112c68d960b3bd38f8fac52ec570505b,kubernetes.io/config.seen: 2024-12-05T19:19:40.732055253Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b28b788c-55fb-4076-89bb-260d09f03978 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.360478175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7206131-4833-47cb-af1b-68b21c89150a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.360737960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7206131-4833-47cb-af1b-68b21c89150a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.360972831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7206131-4833-47cb-af1b-68b21c89150a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.371906880Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8d5d7f4-70af-4ea4-bd68-8ca572bfafb4 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.371975701Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8d5d7f4-70af-4ea4-bd68-8ca572bfafb4 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.373418195Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0878003-b00e-4141-a0d9-5aebe3e1dc21 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.374589415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426785374468110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0878003-b00e-4141-a0d9-5aebe3e1dc21 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.375445055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e6a5cf9-76a3-4448-9202-8b69b8e75b67 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.375587927Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e6a5cf9-76a3-4448-9202-8b69b8e75b67 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.375871802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e6a5cf9-76a3-4448-9202-8b69b8e75b67 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.421462199Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6be0c32-1be1-4e7c-a8f9-1366cb8cd3c8 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.421652314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6be0c32-1be1-4e7c-a8f9-1366cb8cd3c8 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.422869859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d15c5656-c6c0-49a5-8cd2-499dad8c5eac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.423317336Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426785423295640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d15c5656-c6c0-49a5-8cd2-499dad8c5eac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.424044670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f690d44b-6343-4d17-816f-799a9c5cf72f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.424096448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f690d44b-6343-4d17-816f-799a9c5cf72f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.424344773Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f690d44b-6343-4d17-816f-799a9c5cf72f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.467436551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b646bb3c-e855-4737-b632-2f866fcad154 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.467612826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b646bb3c-e855-4737-b632-2f866fcad154 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.468696573Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77db324c-f179-40a1-8007-f02f2d30aa8e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.469135376Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426785469113378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77db324c-f179-40a1-8007-f02f2d30aa8e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.470275612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4fcc315-bdd1-4a56-8bbf-f19603f4549c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.470326188Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4fcc315-bdd1-4a56-8bbf-f19603f4549c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:25 ha-106302 crio[666]: time="2024-12-05 19:26:25.470655962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4fcc315-bdd1-4a56-8bbf-f19603f4549c name=/runtime.v1.RuntimeService/ListContainers
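The crio debug entries above show a CRI client on the node cycling through the same three RPCs against cri-o 1.29.1 (Version, ImageFsInfo and an unfiltered ListContainers), with the request timestamps roughly 50 ms apart. As a rough sketch of how to issue the same calls by hand (assuming the ha-106302 profile is still running and that crictl is available inside the guest, as it is in the minikube VM image), the host-side equivalents are:

  minikube ssh -p ha-106302 "sudo crictl version"
  minikube ssh -p ha-106302 "sudo crictl imagefsinfo"
  minikube ssh -p ha-106302 "sudo crictl ps -a"

Adding -n ha-106302-m02 targets the second control-plane node instead of the primary.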
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8175779cb5746       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   619925cbc39c6       busybox-7dff88458-p8z47
	d7af42dff52cf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   95ad32628ed37       coredns-7c65d6cfc9-sjsv2
	71878f2ac51ce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   79783fce24db9       coredns-7c65d6cfc9-45m77
	a647561fc8a81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ba65941872158       storage-provisioner
	8e0e4de270d59       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   5f62be7378940       kindnet-xr9mh
	013c8063671c4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   dc8d6361e4972       kube-proxy-zw6nj
	a639bf005af20       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   3cfec88984b8a       kube-vip-ha-106302
	73802addf28ef       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   594e9eb586b32       etcd-ha-106302
	8d7fcd5f7d56d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   c920b14cf50aa       kube-apiserver-ha-106302
	dec1697264029       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   411118291d3f3       kube-scheduler-ha-106302
	c251344563e46       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   890699ae2c7d2       kube-controller-manager-ha-106302
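This table is the condensed, crictl-style view of the ListContainers payload above: one row per container, with a truncated container ID, the image reference, age, state, and the owning pod. The same inventory from the cluster side (a sketch assuming the kubeconfig context is named after the profile, as minikube sets up by default) is:

  kubectl --context ha-106302 get pods -A -o wide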
	
	
	==> coredns [71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07] <==
	[INFO] 127.0.0.1:37176 - 32561 "HINFO IN 3495974066793148999.5277118907247610982. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022894865s
	[INFO] 10.244.1.2:51203 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.01735349s
	[INFO] 10.244.2.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000272502s
	[INFO] 10.244.2.2:53757 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001751263s
	[INFO] 10.244.2.2:54738 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000495007s
	[INFO] 10.244.0.4:45576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000412263s
	[INFO] 10.244.0.4:48159 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000083837s
	[INFO] 10.244.1.2:34578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000302061s
	[INFO] 10.244.1.2:54721 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000235254s
	[INFO] 10.244.1.2:43877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000206178s
	[INFO] 10.244.1.2:35725 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012413s
	[INFO] 10.244.2.2:53111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00036507s
	[INFO] 10.244.2.2:60205 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00019223s
	[INFO] 10.244.2.2:49031 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000279282s
	[INFO] 10.244.1.2:48336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174589s
	[INFO] 10.244.1.2:47520 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164259s
	[INFO] 10.244.1.2:58000 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119136s
	[INFO] 10.244.1.2:52602 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196285s
	[INFO] 10.244.2.2:53065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143333s
	[INFO] 10.244.0.4:50807 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119749s
	[INFO] 10.244.0.4:60692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073699s
	[INFO] 10.244.1.2:46283 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281341s
	[INFO] 10.244.1.2:51750 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153725s
	[INFO] 10.244.2.2:33715 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141245s
	[INFO] 10.244.0.4:40497 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233306s
	
	
	==> coredns [d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b] <==
	[INFO] 10.244.2.2:53827 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001485777s
	[INFO] 10.244.2.2:55594 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000308847s
	[INFO] 10.244.2.2:34459 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118477s
	[INFO] 10.244.2.2:39473 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062912s
	[INFO] 10.244.0.4:50797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084736s
	[INFO] 10.244.0.4:49715 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001903972s
	[INFO] 10.244.0.4:60150 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000344373s
	[INFO] 10.244.0.4:43238 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075717s
	[INFO] 10.244.0.4:55133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001508595s
	[INFO] 10.244.0.4:49161 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071435s
	[INFO] 10.244.0.4:34396 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048471s
	[INFO] 10.244.0.4:40602 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037032s
	[INFO] 10.244.2.2:46010 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013718s
	[INFO] 10.244.2.2:59322 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108224s
	[INFO] 10.244.2.2:38750 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154868s
	[INFO] 10.244.0.4:43291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123825s
	[INFO] 10.244.0.4:44515 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163484s
	[INFO] 10.244.1.2:60479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154514s
	[INFO] 10.244.1.2:42615 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210654s
	[INFO] 10.244.2.2:57422 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132377s
	[INFO] 10.244.2.2:51037 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00039203s
	[INFO] 10.244.2.2:35850 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148988s
	[INFO] 10.244.0.4:37661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206627s
	[INFO] 10.244.0.4:43810 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129193s
	[INFO] 10.244.0.4:47355 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145369s
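The two coredns blocks are output of the default log plugin: each line carries the client ip:port, the query id, then "TYPE CLASS name proto request_size DO bufsize" in quotes, followed by the response code, response flags, response size and duration. The NXDOMAIN answers for names such as kubernetes.default.default.svc.cluster.local are the expected by-product of the pods' search-path expansion, not failures. A quicker way to pull the same logs for both replicas (a sketch assuming the standard k8s-app=kube-dns label on the CoreDNS pods and a kubeconfig context named after the profile):

  kubectl --context ha-106302 -n kube-system logs -l k8s-app=kube-dns --tail=25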
	
	
	==> describe nodes <==
	Name:               ha-106302
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T19_19_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:19:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:20:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    ha-106302
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fbfe8f29ea445c2a705d4735bab42d9
	  System UUID:                9fbfe8f2-9ea4-45c2-a705-d4735bab42d9
	  Boot ID:                    fbdd1078-6187-4d3e-90aa-6ba60d4d7163
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p8z47              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-7c65d6cfc9-45m77             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m33s
	  kube-system                 coredns-7c65d6cfc9-sjsv2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m33s
	  kube-system                 etcd-ha-106302                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m38s
	  kube-system                 kindnet-xr9mh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m33s
	  kube-system                 kube-apiserver-ha-106302             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-controller-manager-ha-106302    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-proxy-zw6nj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-scheduler-ha-106302             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-vip-ha-106302                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m32s  kube-proxy       
	  Normal  Starting                 6m38s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m38s  kubelet          Node ha-106302 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s  kubelet          Node ha-106302 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m38s  kubelet          Node ha-106302 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m34s  node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
	  Normal  NodeReady                6m17s  kubelet          Node ha-106302 status is now: NodeReady
	  Normal  RegisteredNode           5m29s  node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
	  Normal  RegisteredNode           4m14s  node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
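The ha-106302 entry above is healthy: Ready is True, no taints are set, and all eleven non-terminated pods are listed. The ha-106302-m02 entry that follows is the node whose kubelet stopped posting status, so its conditions have flipped to Unknown and the node controller has applied the node.kubernetes.io/unreachable taints. A compact way to read just those two signals (a sketch, again assuming the ha-106302 context) is:

  kubectl --context ha-106302 get node ha-106302-m02 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
  kubectl --context ha-106302 get node ha-106302-m02 -o jsonpath='{.spec.taints}{"\n"}'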
	
	
	Name:               ha-106302-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_20_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:20:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:23:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-106302-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ca37a23968d4b139155a7b713c26828
	  System UUID:                3ca37a23-968d-4b13-9155-a7b713c26828
	  Boot ID:                    36db6c69-1ef9-45e9-8548-ed0c2d08168d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9kxtc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-106302-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m36s
	  kube-system                 kindnet-thcsp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m38s
	  kube-system                 kube-apiserver-ha-106302-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-controller-manager-ha-106302-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-proxy-n57lf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-scheduler-ha-106302-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-vip-ha-106302-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m33s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m38s (x8 over 5m38s)  kubelet          Node ha-106302-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s (x8 over 5m38s)  kubelet          Node ha-106302-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m38s (x7 over 5m38s)  kubelet          Node ha-106302-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-106302-m02 status is now: NodeNotReady
	
	
	Name:               ha-106302-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_22_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:22:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.151
	  Hostname:    ha-106302-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c79436ccca5a4dcb864b64b8f1638e64
	  System UUID:                c79436cc-ca5a-4dcb-864b-64b8f1638e64
	  Boot ID:                    c0d22d1e-5115-47a7-a1b2-4a76f9bfc0f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tp62                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-106302-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kindnet-wdsv9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m23s
	  kube-system                 kube-apiserver-ha-106302-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-ha-106302-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-pghdx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-ha-106302-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-vip-ha-106302-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x8 over 4m23s)  kubelet          Node ha-106302-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x8 over 4m23s)  kubelet          Node ha-106302-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x7 over 4m23s)  kubelet          Node ha-106302-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	
	
	Name:               ha-106302-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_23_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:23:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-106302-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 230adc0a6a8a4784a2711e0f05c0dc5c
	  System UUID:                230adc0a-6a8a-4784-a271-1e0f05c0dc5c
	  Boot ID:                    c550c7a6-b9cf-4484-890e-5c6b9b697be6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4x5qd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m15s
	  kube-system                 kube-proxy-2dvtn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m10s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m15s                  cidrAllocator    Node ha-106302-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m15s (x2 over 3m16s)  kubelet          Node ha-106302-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m15s (x2 over 3m16s)  kubelet          Node ha-106302-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m15s (x2 over 3m16s)  kubelet          Node ha-106302-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-106302-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 5 19:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052678] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040068] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.967635] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.737822] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.642469] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.132933] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.059010] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077817] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.173461] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.135588] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.266467] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.207512] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +3.975007] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.063464] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.124511] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.093371] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.093366] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.201097] kauditd_printk_skb: 34 callbacks suppressed
	[Dec 5 19:20] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a] <==
	{"level":"warn","ts":"2024-12-05T19:26:25.758201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.765699Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.772885Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.775648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.784753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.793662Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.799471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.800331Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.804056Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.853739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.865292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.867756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.873914Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.880131Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.885116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.888714Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.894617Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.900613Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.907268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.911637Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.914824Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.918259Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.925076Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.932016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:25.965588Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:26:26 up 7 min,  0 users,  load average: 0.43, 0.31, 0.15
	Linux ha-106302 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e] <==
	I1205 19:25:48.038691       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:25:58.032212       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:25:58.032349       1 main.go:301] handling current node
	I1205 19:25:58.032381       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:25:58.032409       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:25:58.032728       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:25:58.032781       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:25:58.032936       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:25:58.032961       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:26:08.033900       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:26:08.033997       1 main.go:301] handling current node
	I1205 19:26:08.034040       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:26:08.034061       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:26:08.034788       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:26:08.034868       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:26:08.035323       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:26:08.036186       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:26:18.031621       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:26:18.031663       1 main.go:301] handling current node
	I1205 19:26:18.031679       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:26:18.031683       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:26:18.031927       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:26:18.031962       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:26:18.032073       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:26:18.032101       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44] <==
	W1205 19:19:46.101456       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.185]
	I1205 19:19:46.102689       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 19:19:46.107444       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 19:19:46.330379       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 19:19:47.696704       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 19:19:47.715088       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 19:19:47.729079       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 19:19:52.034082       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1205 19:19:52.100936       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1205 19:22:38.001032       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32830: use of closed network connection
	E1205 19:22:38.204236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32840: use of closed network connection
	E1205 19:22:38.401399       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32852: use of closed network connection
	E1205 19:22:38.650810       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32868: use of closed network connection
	E1205 19:22:38.848239       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32882: use of closed network connection
	E1205 19:22:39.039033       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32892: use of closed network connection
	E1205 19:22:39.233185       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32904: use of closed network connection
	E1205 19:22:39.423024       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32930: use of closed network connection
	E1205 19:22:39.623335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32946: use of closed network connection
	E1205 19:22:39.929919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32972: use of closed network connection
	E1205 19:22:40.109732       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32994: use of closed network connection
	E1205 19:22:40.313792       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33004: use of closed network connection
	E1205 19:22:40.512273       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33032: use of closed network connection
	E1205 19:22:40.696838       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33064: use of closed network connection
	E1205 19:22:40.891466       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33092: use of closed network connection
	W1205 19:23:56.103047       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.151 192.168.39.185]
	
	
	==> kube-controller-manager [c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb] <==
	I1205 19:22:37.515258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.952µs"
	I1205 19:22:50.027185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:22:51.994933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302"
	I1205 19:23:03.348987       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m03"
	I1205 19:23:10.074709       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-106302-m04\" does not exist"
	I1205 19:23:10.130455       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-106302-m04" podCIDRs=["10.244.3.0/24"]
	I1205 19:23:10.130559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.130592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.405830       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.799985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:11.200921       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-106302-m04"
	I1205 19:23:11.286372       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:20.510971       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.164993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.165813       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-106302-m04"
	I1205 19:23:31.181172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.224422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:41.047269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:24:36.318018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:36.318367       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-106302-m04"
	I1205 19:24:36.348027       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:36.462551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.68033ms"
	I1205 19:24:36.463140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="102.944µs"
	I1205 19:24:36.509355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:41.525728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	
	
	==> kube-proxy [013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 19:19:53.137314       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 19:19:53.171420       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E1205 19:19:53.171824       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 19:19:53.214655       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 19:19:53.214741       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 19:19:53.214788       1 server_linux.go:169] "Using iptables Proxier"
	I1205 19:19:53.217916       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 19:19:53.218705       1 server.go:483] "Version info" version="v1.31.2"
	I1205 19:19:53.218777       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:19:53.220962       1 config.go:199] "Starting service config controller"
	I1205 19:19:53.221650       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 19:19:53.221992       1 config.go:105] "Starting endpoint slice config controller"
	I1205 19:19:53.222064       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 19:19:53.223609       1 config.go:328] "Starting node config controller"
	I1205 19:19:53.226006       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 19:19:53.322722       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 19:19:53.322841       1 shared_informer.go:320] Caches are synced for service config
	I1205 19:19:53.326785       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8] <==
	W1205 19:19:45.698374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:19:45.698482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:19:45.740149       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:19:45.740541       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1205 19:19:48.195246       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1205 19:22:02.375222       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tpm2m\": pod kube-proxy-tpm2m is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tpm2m" node="ha-106302-m03"
	E1205 19:22:02.375416       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1976f453-f240-48ff-bcac-37351800ac58(kube-system/kube-proxy-tpm2m) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tpm2m"
	E1205 19:22:02.375449       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tpm2m\": pod kube-proxy-tpm2m is already assigned to node \"ha-106302-m03\"" pod="kube-system/kube-proxy-tpm2m"
	I1205 19:22:02.375580       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tpm2m" node="ha-106302-m03"
	E1205 19:22:02.382616       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wdsv9\": pod kindnet-wdsv9 is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wdsv9" node="ha-106302-m03"
	E1205 19:22:02.382763       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 83d82f5d-42c3-47be-af20-41b82c16b114(kube-system/kindnet-wdsv9) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-wdsv9"
	E1205 19:22:02.382784       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wdsv9\": pod kindnet-wdsv9 is already assigned to node \"ha-106302-m03\"" pod="kube-system/kindnet-wdsv9"
	I1205 19:22:02.382811       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wdsv9" node="ha-106302-m03"
	E1205 19:22:02.429049       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pghdx\": pod kube-proxy-pghdx is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pghdx" node="ha-106302-m03"
	E1205 19:22:02.429116       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 915060a3-353c-4a2c-a9d6-494206776446(kube-system/kube-proxy-pghdx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-pghdx"
	E1205 19:22:02.429132       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pghdx\": pod kube-proxy-pghdx is already assigned to node \"ha-106302-m03\"" pod="kube-system/kube-proxy-pghdx"
	I1205 19:22:02.429156       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-pghdx" node="ha-106302-m03"
	E1205 19:22:32.450165       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-p8z47\": pod busybox-7dff88458-p8z47 is already assigned to node \"ha-106302\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-p8z47" node="ha-106302"
	E1205 19:22:32.450464       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 16e14c1a-196d-42a8-b245-1a488cb9667f(default/busybox-7dff88458-p8z47) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-p8z47"
	E1205 19:22:32.450610       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-p8z47\": pod busybox-7dff88458-p8z47 is already assigned to node \"ha-106302\"" pod="default/busybox-7dff88458-p8z47"
	I1205 19:22:32.450729       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-p8z47" node="ha-106302"
	E1205 19:22:32.450776       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9tp62\": pod busybox-7dff88458-9tp62 is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9tp62" node="ha-106302-m03"
	E1205 19:22:32.459571       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod afb0c778-acb1-4db0-b0b6-f054049d0a9d(default/busybox-7dff88458-9tp62) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-9tp62"
	E1205 19:22:32.460188       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9tp62\": pod busybox-7dff88458-9tp62 is already assigned to node \"ha-106302-m03\"" pod="default/busybox-7dff88458-9tp62"
	I1205 19:22:32.460282       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-9tp62" node="ha-106302-m03"
	
	
	==> kubelet <==
	Dec 05 19:24:47 ha-106302 kubelet[1308]: E1205 19:24:47.778614    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426687778175124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:47 ha-106302 kubelet[1308]: E1205 19:24:47.778767    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426687778175124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:57 ha-106302 kubelet[1308]: E1205 19:24:57.781563    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426697781244346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:57 ha-106302 kubelet[1308]: E1205 19:24:57.781621    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426697781244346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:07 ha-106302 kubelet[1308]: E1205 19:25:07.783663    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426707783267296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:07 ha-106302 kubelet[1308]: E1205 19:25:07.783686    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426707783267296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:17 ha-106302 kubelet[1308]: E1205 19:25:17.787301    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426717786088822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:17 ha-106302 kubelet[1308]: E1205 19:25:17.788092    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426717786088822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:27 ha-106302 kubelet[1308]: E1205 19:25:27.791254    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426727789306197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:27 ha-106302 kubelet[1308]: E1205 19:25:27.792185    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426727789306197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:37 ha-106302 kubelet[1308]: E1205 19:25:37.793643    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426737793262536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:37 ha-106302 kubelet[1308]: E1205 19:25:37.793688    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426737793262536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.685793    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 19:25:47 ha-106302 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.795235    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426747794906816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.795258    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426747794906816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:57 ha-106302 kubelet[1308]: E1205 19:25:57.797302    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426757796435936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:57 ha-106302 kubelet[1308]: E1205 19:25:57.798201    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426757796435936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:07 ha-106302 kubelet[1308]: E1205 19:26:07.800104    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426767799828720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:07 ha-106302 kubelet[1308]: E1205 19:26:07.800714    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426767799828720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:17 ha-106302 kubelet[1308]: E1205 19:26:17.806169    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426777803286232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:17 ha-106302 kubelet[1308]: E1205 19:26:17.806235    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426777803286232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-106302 -n ha-106302
helpers_test.go:261: (dbg) Run:  kubectl --context ha-106302 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.98268747s)
ha_test.go:309: expected profile "ha-106302" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-106302\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-106302\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-106302\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.185\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.22\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.151\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.7\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":
false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"M
ountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-106302 -n ha-106302
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-106302 logs -n 25: (1.514584281s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302:/home/docker/cp-test_ha-106302-m03_ha-106302.txt                     |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302 sudo cat                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302.txt                               |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m04 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp testdata/cp-test.txt                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302:/home/docker/cp-test_ha-106302-m04_ha-106302.txt                     |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302 sudo cat                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302.txt                               |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03:/home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m03 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-106302 node stop m02 -v=7                                                   | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-106302 node start m02 -v=7                                                  | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:19:05
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:19:05.666020  549077 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:19:05.666172  549077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:19:05.666182  549077 out.go:358] Setting ErrFile to fd 2...
	I1205 19:19:05.666187  549077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:19:05.666372  549077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:19:05.666982  549077 out.go:352] Setting JSON to false
	I1205 19:19:05.667993  549077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7292,"bootTime":1733419054,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:19:05.668118  549077 start.go:139] virtualization: kvm guest
	I1205 19:19:05.670258  549077 out.go:177] * [ha-106302] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:19:05.672244  549077 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:19:05.672310  549077 notify.go:220] Checking for updates...
	I1205 19:19:05.674836  549077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:19:05.676311  549077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:05.677586  549077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:05.678906  549077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:19:05.680179  549077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:19:05.681501  549077 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:19:05.716520  549077 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 19:19:05.718361  549077 start.go:297] selected driver: kvm2
	I1205 19:19:05.718375  549077 start.go:901] validating driver "kvm2" against <nil>
	I1205 19:19:05.718387  549077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:19:05.719138  549077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:19:05.719217  549077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:19:05.734721  549077 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:19:05.734777  549077 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:19:05.735145  549077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:19:05.735198  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:05.735258  549077 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 19:19:05.735271  549077 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 19:19:05.735352  549077 start.go:340] cluster config:
	{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1205 19:19:05.735498  549077 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:19:05.737389  549077 out.go:177] * Starting "ha-106302" primary control-plane node in "ha-106302" cluster
	I1205 19:19:05.738520  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:05.738565  549077 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:19:05.738579  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:19:05.738663  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:19:05.738678  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:19:05.739034  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:05.739058  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json: {Name:mk36f887968924e3b867abb3b152df7882583b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:05.739210  549077 start.go:360] acquireMachinesLock for ha-106302: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:19:05.739241  549077 start.go:364] duration metric: took 16.973µs to acquireMachinesLock for "ha-106302"
	I1205 19:19:05.739258  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:05.739311  549077 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 19:19:05.740876  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:19:05.741018  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:05.741056  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:05.755320  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I1205 19:19:05.755768  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:05.756364  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:05.756386  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:05.756720  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:05.756918  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:05.757058  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:05.757247  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:19:05.757287  549077 client.go:168] LocalClient.Create starting
	I1205 19:19:05.757338  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:19:05.757377  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:05.757396  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:05.757476  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:19:05.757503  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:05.757522  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:05.757549  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:19:05.757567  549077 main.go:141] libmachine: (ha-106302) Calling .PreCreateCheck
	I1205 19:19:05.757886  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:05.758310  549077 main.go:141] libmachine: Creating machine...
	I1205 19:19:05.758325  549077 main.go:141] libmachine: (ha-106302) Calling .Create
	I1205 19:19:05.758443  549077 main.go:141] libmachine: (ha-106302) Creating KVM machine...
	I1205 19:19:05.759563  549077 main.go:141] libmachine: (ha-106302) DBG | found existing default KVM network
	I1205 19:19:05.760292  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:05.760130  549100 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1205 19:19:05.760373  549077 main.go:141] libmachine: (ha-106302) DBG | created network xml: 
	I1205 19:19:05.760394  549077 main.go:141] libmachine: (ha-106302) DBG | <network>
	I1205 19:19:05.760405  549077 main.go:141] libmachine: (ha-106302) DBG |   <name>mk-ha-106302</name>
	I1205 19:19:05.760417  549077 main.go:141] libmachine: (ha-106302) DBG |   <dns enable='no'/>
	I1205 19:19:05.760428  549077 main.go:141] libmachine: (ha-106302) DBG |   
	I1205 19:19:05.760437  549077 main.go:141] libmachine: (ha-106302) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 19:19:05.760450  549077 main.go:141] libmachine: (ha-106302) DBG |     <dhcp>
	I1205 19:19:05.760460  549077 main.go:141] libmachine: (ha-106302) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 19:19:05.760472  549077 main.go:141] libmachine: (ha-106302) DBG |     </dhcp>
	I1205 19:19:05.760488  549077 main.go:141] libmachine: (ha-106302) DBG |   </ip>
	I1205 19:19:05.760499  549077 main.go:141] libmachine: (ha-106302) DBG |   
	I1205 19:19:05.760507  549077 main.go:141] libmachine: (ha-106302) DBG | </network>
	I1205 19:19:05.760517  549077 main.go:141] libmachine: (ha-106302) DBG | 
	I1205 19:19:05.765547  549077 main.go:141] libmachine: (ha-106302) DBG | trying to create private KVM network mk-ha-106302 192.168.39.0/24...
	I1205 19:19:05.832912  549077 main.go:141] libmachine: (ha-106302) DBG | private KVM network mk-ha-106302 192.168.39.0/24 created
	I1205 19:19:05.832950  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:05.832854  549100 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:05.832976  549077 main.go:141] libmachine: (ha-106302) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 ...
	I1205 19:19:05.832995  549077 main.go:141] libmachine: (ha-106302) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:19:05.833015  549077 main.go:141] libmachine: (ha-106302) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:19:06.116114  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.115928  549100 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa...
	I1205 19:19:06.195132  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.194945  549100 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/ha-106302.rawdisk...
	I1205 19:19:06.195166  549077 main.go:141] libmachine: (ha-106302) DBG | Writing magic tar header
	I1205 19:19:06.195176  549077 main.go:141] libmachine: (ha-106302) DBG | Writing SSH key tar header
	I1205 19:19:06.195183  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:06.195098  549100 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 ...
	I1205 19:19:06.195194  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302
	I1205 19:19:06.195272  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302 (perms=drwx------)
	I1205 19:19:06.195294  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:19:06.195305  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:19:06.195321  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:06.195332  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:19:06.195340  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:19:06.195349  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:19:06.195354  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:19:06.195360  549077 main.go:141] libmachine: (ha-106302) DBG | Checking permissions on dir: /home
	I1205 19:19:06.195379  549077 main.go:141] libmachine: (ha-106302) DBG | Skipping /home - not owner
	I1205 19:19:06.195390  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:19:06.195397  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:19:06.195403  549077 main.go:141] libmachine: (ha-106302) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:19:06.195409  549077 main.go:141] libmachine: (ha-106302) Creating domain...
	I1205 19:19:06.196529  549077 main.go:141] libmachine: (ha-106302) define libvirt domain using xml: 
	I1205 19:19:06.196544  549077 main.go:141] libmachine: (ha-106302) <domain type='kvm'>
	I1205 19:19:06.196550  549077 main.go:141] libmachine: (ha-106302)   <name>ha-106302</name>
	I1205 19:19:06.196561  549077 main.go:141] libmachine: (ha-106302)   <memory unit='MiB'>2200</memory>
	I1205 19:19:06.196569  549077 main.go:141] libmachine: (ha-106302)   <vcpu>2</vcpu>
	I1205 19:19:06.196578  549077 main.go:141] libmachine: (ha-106302)   <features>
	I1205 19:19:06.196586  549077 main.go:141] libmachine: (ha-106302)     <acpi/>
	I1205 19:19:06.196595  549077 main.go:141] libmachine: (ha-106302)     <apic/>
	I1205 19:19:06.196603  549077 main.go:141] libmachine: (ha-106302)     <pae/>
	I1205 19:19:06.196621  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.196632  549077 main.go:141] libmachine: (ha-106302)   </features>
	I1205 19:19:06.196643  549077 main.go:141] libmachine: (ha-106302)   <cpu mode='host-passthrough'>
	I1205 19:19:06.196652  549077 main.go:141] libmachine: (ha-106302)   
	I1205 19:19:06.196658  549077 main.go:141] libmachine: (ha-106302)   </cpu>
	I1205 19:19:06.196670  549077 main.go:141] libmachine: (ha-106302)   <os>
	I1205 19:19:06.196677  549077 main.go:141] libmachine: (ha-106302)     <type>hvm</type>
	I1205 19:19:06.196689  549077 main.go:141] libmachine: (ha-106302)     <boot dev='cdrom'/>
	I1205 19:19:06.196704  549077 main.go:141] libmachine: (ha-106302)     <boot dev='hd'/>
	I1205 19:19:06.196715  549077 main.go:141] libmachine: (ha-106302)     <bootmenu enable='no'/>
	I1205 19:19:06.196724  549077 main.go:141] libmachine: (ha-106302)   </os>
	I1205 19:19:06.196732  549077 main.go:141] libmachine: (ha-106302)   <devices>
	I1205 19:19:06.196743  549077 main.go:141] libmachine: (ha-106302)     <disk type='file' device='cdrom'>
	I1205 19:19:06.196758  549077 main.go:141] libmachine: (ha-106302)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/boot2docker.iso'/>
	I1205 19:19:06.196769  549077 main.go:141] libmachine: (ha-106302)       <target dev='hdc' bus='scsi'/>
	I1205 19:19:06.196777  549077 main.go:141] libmachine: (ha-106302)       <readonly/>
	I1205 19:19:06.196783  549077 main.go:141] libmachine: (ha-106302)     </disk>
	I1205 19:19:06.196795  549077 main.go:141] libmachine: (ha-106302)     <disk type='file' device='disk'>
	I1205 19:19:06.196806  549077 main.go:141] libmachine: (ha-106302)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:19:06.196821  549077 main.go:141] libmachine: (ha-106302)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/ha-106302.rawdisk'/>
	I1205 19:19:06.196833  549077 main.go:141] libmachine: (ha-106302)       <target dev='hda' bus='virtio'/>
	I1205 19:19:06.196842  549077 main.go:141] libmachine: (ha-106302)     </disk>
	I1205 19:19:06.196851  549077 main.go:141] libmachine: (ha-106302)     <interface type='network'>
	I1205 19:19:06.196861  549077 main.go:141] libmachine: (ha-106302)       <source network='mk-ha-106302'/>
	I1205 19:19:06.196873  549077 main.go:141] libmachine: (ha-106302)       <model type='virtio'/>
	I1205 19:19:06.196896  549077 main.go:141] libmachine: (ha-106302)     </interface>
	I1205 19:19:06.196909  549077 main.go:141] libmachine: (ha-106302)     <interface type='network'>
	I1205 19:19:06.196919  549077 main.go:141] libmachine: (ha-106302)       <source network='default'/>
	I1205 19:19:06.196927  549077 main.go:141] libmachine: (ha-106302)       <model type='virtio'/>
	I1205 19:19:06.196936  549077 main.go:141] libmachine: (ha-106302)     </interface>
	I1205 19:19:06.196944  549077 main.go:141] libmachine: (ha-106302)     <serial type='pty'>
	I1205 19:19:06.196953  549077 main.go:141] libmachine: (ha-106302)       <target port='0'/>
	I1205 19:19:06.196962  549077 main.go:141] libmachine: (ha-106302)     </serial>
	I1205 19:19:06.196975  549077 main.go:141] libmachine: (ha-106302)     <console type='pty'>
	I1205 19:19:06.196984  549077 main.go:141] libmachine: (ha-106302)       <target type='serial' port='0'/>
	I1205 19:19:06.196996  549077 main.go:141] libmachine: (ha-106302)     </console>
	I1205 19:19:06.197007  549077 main.go:141] libmachine: (ha-106302)     <rng model='virtio'>
	I1205 19:19:06.197017  549077 main.go:141] libmachine: (ha-106302)       <backend model='random'>/dev/random</backend>
	I1205 19:19:06.197028  549077 main.go:141] libmachine: (ha-106302)     </rng>
	I1205 19:19:06.197036  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.197055  549077 main.go:141] libmachine: (ha-106302)     
	I1205 19:19:06.197068  549077 main.go:141] libmachine: (ha-106302)   </devices>
	I1205 19:19:06.197073  549077 main.go:141] libmachine: (ha-106302) </domain>
	I1205 19:19:06.197078  549077 main.go:141] libmachine: (ha-106302) 
	I1205 19:19:06.202279  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:71:9c:4d in network default
	I1205 19:19:06.203034  549077 main.go:141] libmachine: (ha-106302) Ensuring networks are active...
	I1205 19:19:06.203055  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:06.203739  549077 main.go:141] libmachine: (ha-106302) Ensuring network default is active
	I1205 19:19:06.204123  549077 main.go:141] libmachine: (ha-106302) Ensuring network mk-ha-106302 is active
	I1205 19:19:06.204705  549077 main.go:141] libmachine: (ha-106302) Getting domain xml...
	I1205 19:19:06.205494  549077 main.go:141] libmachine: (ha-106302) Creating domain...
	I1205 19:19:07.414905  549077 main.go:141] libmachine: (ha-106302) Waiting to get IP...
	I1205 19:19:07.415701  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:07.416131  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:07.416172  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:07.416110  549100 retry.go:31] will retry after 254.984492ms: waiting for machine to come up
	I1205 19:19:07.672644  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:07.673096  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:07.673126  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:07.673025  549100 retry.go:31] will retry after 337.308268ms: waiting for machine to come up
	I1205 19:19:08.011677  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.012131  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.012153  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.012097  549100 retry.go:31] will retry after 331.381496ms: waiting for machine to come up
	I1205 19:19:08.344830  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.345286  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.345315  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.345230  549100 retry.go:31] will retry after 526.921251ms: waiting for machine to come up
	I1205 19:19:08.874020  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:08.874426  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:08.874457  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:08.874366  549100 retry.go:31] will retry after 677.76743ms: waiting for machine to come up
	I1205 19:19:09.554490  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:09.555045  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:09.555078  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:09.554953  549100 retry.go:31] will retry after 810.208397ms: waiting for machine to come up
	I1205 19:19:10.367000  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:10.367429  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:10.367463  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:10.367397  549100 retry.go:31] will retry after 1.115748222s: waiting for machine to come up
	I1205 19:19:11.484531  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:11.485067  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:11.485098  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:11.485008  549100 retry.go:31] will retry after 1.3235703s: waiting for machine to come up
	I1205 19:19:12.810602  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:12.810991  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:12.811014  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:12.810945  549100 retry.go:31] will retry after 1.831554324s: waiting for machine to come up
	I1205 19:19:14.645035  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:14.645488  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:14.645513  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:14.645439  549100 retry.go:31] will retry after 1.712987373s: waiting for machine to come up
	I1205 19:19:16.360441  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:16.361053  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:16.361095  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:16.360964  549100 retry.go:31] will retry after 1.757836043s: waiting for machine to come up
	I1205 19:19:18.120905  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:18.121462  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:18.121490  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:18.121398  549100 retry.go:31] will retry after 2.555295546s: waiting for machine to come up
	I1205 19:19:20.680255  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:20.680831  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:20.680857  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:20.680783  549100 retry.go:31] will retry after 3.433196303s: waiting for machine to come up
	I1205 19:19:24.117782  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:24.118200  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find current IP address of domain ha-106302 in network mk-ha-106302
	I1205 19:19:24.118225  549077 main.go:141] libmachine: (ha-106302) DBG | I1205 19:19:24.118165  549100 retry.go:31] will retry after 5.333530854s: waiting for machine to come up
	I1205 19:19:29.456371  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.456820  549077 main.go:141] libmachine: (ha-106302) Found IP for machine: 192.168.39.185
	I1205 19:19:29.456837  549077 main.go:141] libmachine: (ha-106302) Reserving static IP address...
	I1205 19:19:29.456845  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has current primary IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.457259  549077 main.go:141] libmachine: (ha-106302) DBG | unable to find host DHCP lease matching {name: "ha-106302", mac: "52:54:00:3b:e4:76", ip: "192.168.39.185"} in network mk-ha-106302
	I1205 19:19:29.532847  549077 main.go:141] libmachine: (ha-106302) DBG | Getting to WaitForSSH function...
	I1205 19:19:29.532882  549077 main.go:141] libmachine: (ha-106302) Reserved static IP address: 192.168.39.185
	I1205 19:19:29.532895  549077 main.go:141] libmachine: (ha-106302) Waiting for SSH to be available...
	I1205 19:19:29.535405  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.536081  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.536388  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.536771  549077 main.go:141] libmachine: (ha-106302) DBG | Using SSH client type: external
	I1205 19:19:29.536915  549077 main.go:141] libmachine: (ha-106302) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa (-rw-------)
	I1205 19:19:29.536944  549077 main.go:141] libmachine: (ha-106302) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:19:29.536962  549077 main.go:141] libmachine: (ha-106302) DBG | About to run SSH command:
	I1205 19:19:29.536972  549077 main.go:141] libmachine: (ha-106302) DBG | exit 0
	I1205 19:19:29.664869  549077 main.go:141] libmachine: (ha-106302) DBG | SSH cmd err, output: <nil>: 
	I1205 19:19:29.665141  549077 main.go:141] libmachine: (ha-106302) KVM machine creation complete!
	I1205 19:19:29.665477  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:29.666068  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:29.666255  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:29.666420  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:19:29.666438  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:29.667703  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:19:29.667716  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:19:29.667721  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:19:29.667726  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.669895  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.670221  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.670248  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.670353  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.670530  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.670706  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.670840  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.671003  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.671220  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.671232  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:19:29.779777  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:19:29.779805  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:19:29.779833  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.782799  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.783132  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.783166  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.783331  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.783547  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.783683  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.783825  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.783999  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.784181  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.784191  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:19:29.893268  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:19:29.893371  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:19:29.893381  549077 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:19:29.893390  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:29.893630  549077 buildroot.go:166] provisioning hostname "ha-106302"
	I1205 19:19:29.893659  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:29.893862  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:29.896175  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.896531  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:29.896559  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:29.896683  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:29.896874  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.897035  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:29.897188  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:29.897357  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:29.897522  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:29.897537  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302 && echo "ha-106302" | sudo tee /etc/hostname
	I1205 19:19:30.019869  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:19:30.019903  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.022773  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.023137  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.023166  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.023330  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.023501  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.023684  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.023794  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.023973  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.024192  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.024213  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:19:30.142377  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:19:30.142414  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:19:30.142464  549077 buildroot.go:174] setting up certificates
	I1205 19:19:30.142480  549077 provision.go:84] configureAuth start
	I1205 19:19:30.142498  549077 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:19:30.142814  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.145608  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.145944  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.145976  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.146132  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.148289  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.148544  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.148570  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.148679  549077 provision.go:143] copyHostCerts
	I1205 19:19:30.148727  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:19:30.148761  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:19:30.148778  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:19:30.148862  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:19:30.148936  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:19:30.148954  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:19:30.148960  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:19:30.148984  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:19:30.149037  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:19:30.149054  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:19:30.149058  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:19:30.149079  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:19:30.149123  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302 san=[127.0.0.1 192.168.39.185 ha-106302 localhost minikube]
	I1205 19:19:30.203242  549077 provision.go:177] copyRemoteCerts
	I1205 19:19:30.203307  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:19:30.203333  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.206290  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.206588  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.206621  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.206770  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.206956  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.207107  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.207262  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.291637  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:19:30.291726  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:19:30.316534  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:19:30.316648  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 19:19:30.340941  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:19:30.341027  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:19:30.365151  549077 provision.go:87] duration metric: took 222.64958ms to configureAuth
	I1205 19:19:30.365205  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:19:30.365380  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:30.365454  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.367820  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.368297  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.368331  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.368517  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.368750  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.368925  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.369063  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.369263  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.369448  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.369470  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:19:30.602742  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:19:30.602781  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:19:30.602812  549077 main.go:141] libmachine: (ha-106302) Calling .GetURL
	I1205 19:19:30.604203  549077 main.go:141] libmachine: (ha-106302) DBG | Using libvirt version 6000000
	I1205 19:19:30.606408  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.606761  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.606783  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.606936  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:19:30.606953  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:19:30.606980  549077 client.go:171] duration metric: took 24.849681626s to LocalClient.Create
	I1205 19:19:30.607004  549077 start.go:167] duration metric: took 24.849757772s to libmachine.API.Create "ha-106302"
	I1205 19:19:30.607018  549077 start.go:293] postStartSetup for "ha-106302" (driver="kvm2")
	I1205 19:19:30.607027  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:19:30.607063  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.607325  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:19:30.607353  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.609392  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.609687  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.609717  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.609857  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.610024  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.610186  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.610314  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.696960  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:19:30.708057  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:19:30.708089  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:19:30.708159  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:19:30.708255  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:19:30.708293  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:19:30.708421  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:19:30.723671  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:19:30.750926  549077 start.go:296] duration metric: took 143.887881ms for postStartSetup
	I1205 19:19:30.750995  549077 main.go:141] libmachine: (ha-106302) Calling .GetConfigRaw
	I1205 19:19:30.751793  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.754292  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.754719  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.754767  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.755073  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:30.755274  549077 start.go:128] duration metric: took 25.015949989s to createHost
	I1205 19:19:30.755307  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.757830  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.758211  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.758247  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.758373  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.758576  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.758728  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.758849  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.759003  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:19:30.759199  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:19:30.759225  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:19:30.869236  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426370.835143064
	
	I1205 19:19:30.869266  549077 fix.go:216] guest clock: 1733426370.835143064
	I1205 19:19:30.869276  549077 fix.go:229] Guest: 2024-12-05 19:19:30.835143064 +0000 UTC Remote: 2024-12-05 19:19:30.755292155 +0000 UTC m=+25.129028552 (delta=79.850909ms)
	I1205 19:19:30.869342  549077 fix.go:200] guest clock delta is within tolerance: 79.850909ms
	I1205 19:19:30.869354  549077 start.go:83] releasing machines lock for "ha-106302", held for 25.130102669s
	I1205 19:19:30.869396  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.869701  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:30.872169  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.872505  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.872550  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.872651  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873195  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873371  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:30.873461  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:19:30.873500  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.873622  549077 ssh_runner.go:195] Run: cat /version.json
	I1205 19:19:30.873648  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:30.876112  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876348  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876515  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.876544  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876694  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.876787  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:30.876829  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:30.876854  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.876974  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:30.877063  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.877155  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:30.877225  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.877286  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:30.877416  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:30.978260  549077 ssh_runner.go:195] Run: systemctl --version
	I1205 19:19:30.984523  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:19:31.144577  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:19:31.150862  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:19:31.150921  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:19:31.168518  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:19:31.168546  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:19:31.168607  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:19:31.184398  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:19:31.198391  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:19:31.198459  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:19:31.212374  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:19:31.227092  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:19:31.345190  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:19:31.498651  549077 docker.go:233] disabling docker service ...
	I1205 19:19:31.498756  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:19:31.514013  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:19:31.527698  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:19:31.668291  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:19:31.787293  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:19:31.802121  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:19:31.821416  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:19:31.821488  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.831922  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:19:31.832002  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.842263  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.852580  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.863167  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:19:31.873525  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.883966  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.901444  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:19:31.913185  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:19:31.922739  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:19:31.922847  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:19:31.935394  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:19:31.944801  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:19:32.062619  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:19:32.155496  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:19:32.155575  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:19:32.161325  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:19:32.161401  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:19:32.165363  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:19:32.206408  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:19:32.206526  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:19:32.236278  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:19:32.267603  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:19:32.269318  549077 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:19:32.272307  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:32.272654  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:32.272680  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:32.272875  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:19:32.277254  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:19:32.290866  549077 kubeadm.go:883] updating cluster {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:19:32.290982  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:32.291025  549077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:19:32.327363  549077 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 19:19:32.327433  549077 ssh_runner.go:195] Run: which lz4
	I1205 19:19:32.331533  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 19:19:32.331639  549077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 19:19:32.335872  549077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 19:19:32.335904  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 19:19:33.796243  549077 crio.go:462] duration metric: took 1.464622041s to copy over tarball
	I1205 19:19:33.796360  549077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 19:19:35.904137  549077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.107740538s)
	I1205 19:19:35.904177  549077 crio.go:469] duration metric: took 2.107873128s to extract the tarball
	I1205 19:19:35.904188  549077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 19:19:35.941468  549077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:19:35.985079  549077 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:19:35.985107  549077 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:19:35.985116  549077 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.2 crio true true} ...
	I1205 19:19:35.985222  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
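A minimal aside, not part of the captured log: the kubelet drop-in shown above is written to the node later in this run (the 10-kubeadm.conf scp below), so on a live ha-106302 VM the rendered unit can be inspected with systemctl, for example:
	# hedged sketch: print the kubelet unit together with its minikube drop-in on the node
	minikube -p ha-106302 ssh -- systemctl cat kubelet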
	I1205 19:19:35.985289  549077 ssh_runner.go:195] Run: crio config
	I1205 19:19:36.034780  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:36.034806  549077 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 19:19:36.034818  549077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:19:36.034841  549077 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-106302 NodeName:ha-106302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:19:36.035004  549077 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-106302"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
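A minimal aside, not part of the captured log: the kubeadm config printed above is scp'd to /var/tmp/minikube/kubeadm.yaml.new later in this log, so assuming the node is still up it could be sanity-checked with the kubeadm binary already present on the VM before init runs, for example:
	# hedged sketch: offline validation of the generated config (kubeadm config validate exists in v1.28+)
	minikube -p ha-106302 ssh -- sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new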
	I1205 19:19:36.035032  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:19:36.035097  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:19:36.051693  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:19:36.051834  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
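A minimal aside, not part of the captured log: assuming the manifest above lands at /etc/kubernetes/manifests/kube-vip.yaml (as the scp a few lines below shows), the control-plane VIP it advertises can be double-checked on the node, for example:
	# hedged sketch: confirm the kube-vip static pod carries the expected VIP 192.168.39.254
	minikube -p ha-106302 ssh -- grep -A1 address /etc/kubernetes/manifests/kube-vip.yaml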
	I1205 19:19:36.051903  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:19:36.062174  549077 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:19:36.062270  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 19:19:36.072102  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 19:19:36.089037  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:19:36.105710  549077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 19:19:36.122352  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1205 19:19:36.139382  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:19:36.143400  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:19:36.156091  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:19:36.264660  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:19:36.281414  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.185
	I1205 19:19:36.281442  549077 certs.go:194] generating shared ca certs ...
	I1205 19:19:36.281458  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.281638  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:19:36.281689  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:19:36.281704  549077 certs.go:256] generating profile certs ...
	I1205 19:19:36.281767  549077 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:19:36.281786  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt with IP's: []
	I1205 19:19:36.500418  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt ...
	I1205 19:19:36.500457  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt: {Name:mkb14e7bfcf7e74b43ed78fd0539344fe783f416 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.500681  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key ...
	I1205 19:19:36.500700  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key: {Name:mk7e0330a0f2228d88e0f9d58264fe1f08349563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.500831  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da
	I1205 19:19:36.500858  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.254]
	I1205 19:19:36.595145  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da ...
	I1205 19:19:36.595178  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da: {Name:mk6fe31beb668f4be09d7ef716f12b627681f889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.595356  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da ...
	I1205 19:19:36.595368  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da: {Name:mkb2102bd03507fee93efd6f4ad4d01650f6960d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.595451  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.ab85f0da -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:19:36.595530  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.ab85f0da -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:19:36.595588  549077 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:19:36.595600  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt with IP's: []
	I1205 19:19:36.750498  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt ...
	I1205 19:19:36.750528  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt: {Name:mk310719ddd3b7c13526e0d5963ab5146ba62c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.750689  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key ...
	I1205 19:19:36.750700  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key: {Name:mka21d6cd95f23029a85e314b05925420c5b8d35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:36.750768  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:19:36.750785  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:19:36.750796  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:19:36.750809  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:19:36.750819  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:19:36.750831  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:19:36.750841  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:19:36.750856  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:19:36.750907  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:19:36.750946  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:19:36.750968  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:19:36.750995  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:19:36.751018  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:19:36.751046  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:19:36.751085  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:19:36.751157  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:19:36.751182  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:19:36.751197  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:36.751757  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:19:36.777283  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:19:36.800796  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:19:36.824188  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:19:36.847922  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 19:19:36.871853  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:19:36.897433  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:19:36.923449  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:19:36.949838  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:19:36.975187  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:19:36.999764  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:19:37.024507  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:19:37.044052  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:19:37.052297  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:19:37.068345  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.073536  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.073603  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:19:37.080035  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:19:37.091136  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:19:37.115623  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.120621  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.120687  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:19:37.126618  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:19:37.138669  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:19:37.150853  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.155803  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.155881  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:19:37.162049  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:19:37.174819  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:19:37.179494  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:19:37.179570  549077 kubeadm.go:392] StartCluster: {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:19:37.179688  549077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:19:37.179745  549077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:19:37.223116  549077 cri.go:89] found id: ""
	I1205 19:19:37.223191  549077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:19:37.234706  549077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:19:37.247347  549077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:19:37.259258  549077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:19:37.259287  549077 kubeadm.go:157] found existing configuration files:
	
	I1205 19:19:37.259336  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 19:19:37.269699  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 19:19:37.269766  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 19:19:37.280566  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 19:19:37.290999  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 19:19:37.291070  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 19:19:37.302967  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 19:19:37.313065  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 19:19:37.313160  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 19:19:37.323523  549077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 19:19:37.333224  549077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 19:19:37.333286  549077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 19:19:37.343725  549077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 19:19:37.465425  549077 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 19:19:37.465503  549077 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 19:19:37.563680  549077 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:19:37.563837  549077 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:19:37.563944  549077 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 19:19:37.577125  549077 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:19:37.767794  549077 out.go:235]   - Generating certificates and keys ...
	I1205 19:19:37.767998  549077 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 19:19:37.768133  549077 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 19:19:37.768233  549077 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:19:37.823275  549077 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:19:38.256538  549077 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:19:38.418481  549077 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 19:19:38.506453  549077 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 19:19:38.506612  549077 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-106302 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I1205 19:19:38.599268  549077 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 19:19:38.599504  549077 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-106302 localhost] and IPs [192.168.39.185 127.0.0.1 ::1]
	I1205 19:19:38.721006  549077 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:19:38.801347  549077 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:19:39.020781  549077 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 19:19:39.020849  549077 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:19:39.351214  549077 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:19:39.652426  549077 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 19:19:39.852747  549077 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:19:39.949305  549077 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:19:40.093193  549077 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:19:40.093754  549077 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:19:40.099424  549077 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:19:40.101578  549077 out.go:235]   - Booting up control plane ...
	I1205 19:19:40.101681  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:19:40.101747  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:19:40.101808  549077 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:19:40.118245  549077 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:19:40.124419  549077 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:19:40.124472  549077 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 19:19:40.264350  549077 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 19:19:40.264527  549077 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 19:19:40.767072  549077 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.104658ms
	I1205 19:19:40.767195  549077 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 19:19:46.889839  549077 kubeadm.go:310] [api-check] The API server is healthy after 6.126522028s
	I1205 19:19:46.903949  549077 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:19:46.920566  549077 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:19:46.959559  549077 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:19:46.959762  549077 kubeadm.go:310] [mark-control-plane] Marking the node ha-106302 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:19:46.972882  549077 kubeadm.go:310] [bootstrap-token] Using token: hftusq.bke4u9rqswjxk9ui
	I1205 19:19:46.974672  549077 out.go:235]   - Configuring RBAC rules ...
	I1205 19:19:46.974836  549077 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:19:46.983462  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:19:46.993184  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:19:47.001254  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:19:47.006556  549077 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:19:47.012815  549077 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:19:47.297618  549077 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:19:47.737983  549077 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 19:19:48.297207  549077 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 19:19:48.298256  549077 kubeadm.go:310] 
	I1205 19:19:48.298332  549077 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 19:19:48.298344  549077 kubeadm.go:310] 
	I1205 19:19:48.298499  549077 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 19:19:48.298523  549077 kubeadm.go:310] 
	I1205 19:19:48.298551  549077 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 19:19:48.298654  549077 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:19:48.298730  549077 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:19:48.298740  549077 kubeadm.go:310] 
	I1205 19:19:48.298818  549077 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 19:19:48.298835  549077 kubeadm.go:310] 
	I1205 19:19:48.298894  549077 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:19:48.298903  549077 kubeadm.go:310] 
	I1205 19:19:48.298967  549077 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 19:19:48.299056  549077 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:19:48.299139  549077 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:19:48.299148  549077 kubeadm.go:310] 
	I1205 19:19:48.299267  549077 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:19:48.299368  549077 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 19:19:48.299380  549077 kubeadm.go:310] 
	I1205 19:19:48.299496  549077 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hftusq.bke4u9rqswjxk9ui \
	I1205 19:19:48.299623  549077 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 19:19:48.299658  549077 kubeadm.go:310] 	--control-plane 
	I1205 19:19:48.299667  549077 kubeadm.go:310] 
	I1205 19:19:48.299787  549077 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:19:48.299797  549077 kubeadm.go:310] 
	I1205 19:19:48.299896  549077 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hftusq.bke4u9rqswjxk9ui \
	I1205 19:19:48.300017  549077 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 19:19:48.300978  549077 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
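
The kubeadm init run above gates on two health checks before printing the join commands: the kubelet's local healthz endpoint (healthy after ~503ms here) and then the API server (healthy after ~6.1s). As a minimal sketch of that poll-until-healthy pattern (not minikube's or kubeadm's actual code; the endpoint comes from the log, the interval is illustrative):

// healthwait.go: poll an HTTP healthz endpoint until it returns 200 OK or a deadline passes.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// Endpoint taken from the log; 4m0s matches the kubelet-check budget above.
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
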
	I1205 19:19:48.301019  549077 cni.go:84] Creating CNI manager for ""
	I1205 19:19:48.301039  549077 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 19:19:48.302992  549077 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 19:19:48.304422  549077 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 19:19:48.310158  549077 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 19:19:48.310179  549077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 19:19:48.330305  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 19:19:48.708578  549077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:19:48.708692  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:48.708697  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302 minikube.k8s.io/updated_at=2024_12_05T19_19_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=true
	I1205 19:19:48.766673  549077 ops.go:34] apiserver oom_adj: -16
	I1205 19:19:48.946725  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:49.447511  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:49.947827  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:50.447219  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:50.947321  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:51.447070  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:51.946846  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:19:52.030950  549077 kubeadm.go:1113] duration metric: took 3.322332375s to wait for elevateKubeSystemPrivileges
	I1205 19:19:52.030984  549077 kubeadm.go:394] duration metric: took 14.851420641s to StartCluster
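
The repeated `kubectl get sa default` calls above are minikube polling for the default ServiceAccount before reporting that elevating kube-system privileges is done. A minimal Go sketch of that loop, assuming kubectl is on PATH and using an illustrative kubeconfig path:

// sawait.go: retry `kubectl get sa default` roughly every 500ms until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	for {
		// The kubeconfig flag here mirrors the log but is illustrative.
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			break
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Printf("default ServiceAccount ready after %s\n", time.Since(start))
}
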
	I1205 19:19:52.031005  549077 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:52.031096  549077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:52.032088  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:19:52.032382  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:19:52.032390  549077 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:52.032418  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:19:52.032436  549077 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 19:19:52.032529  549077 addons.go:69] Setting storage-provisioner=true in profile "ha-106302"
	I1205 19:19:52.032562  549077 addons.go:234] Setting addon storage-provisioner=true in "ha-106302"
	I1205 19:19:52.032575  549077 addons.go:69] Setting default-storageclass=true in profile "ha-106302"
	I1205 19:19:52.032596  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:19:52.032603  549077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-106302"
	I1205 19:19:52.032616  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:52.032974  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.033012  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.033080  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.033128  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.048867  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I1205 19:19:52.048932  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
	I1205 19:19:52.049474  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.049598  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.050083  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.050108  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.050196  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.050217  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.050494  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.050547  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.050740  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.051108  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.051156  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.053000  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:19:52.053380  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:19:52.053986  549077 cert_rotation.go:140] Starting client certificate rotation controller
	I1205 19:19:52.054434  549077 addons.go:234] Setting addon default-storageclass=true in "ha-106302"
	I1205 19:19:52.054485  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:19:52.054871  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.054924  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.068403  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
	I1205 19:19:52.069056  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.069816  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.069851  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.070279  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.070500  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.071258  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I1205 19:19:52.071775  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.072386  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.072414  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.072576  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:52.072784  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.073435  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:52.073491  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:52.074239  549077 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:19:52.075532  549077 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:19:52.075550  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:19:52.075581  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:52.079231  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.079693  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:52.079729  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.080048  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:52.080297  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:52.080464  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:52.080625  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:52.090582  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1205 19:19:52.091077  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:52.091649  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:52.091690  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:52.092023  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:52.092235  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:19:52.093928  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:19:52.094164  549077 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:19:52.094184  549077 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:19:52.094204  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:19:52.097425  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.097952  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:19:52.097988  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:19:52.098172  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:19:52.098357  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:19:52.098547  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:19:52.098690  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:19:52.240649  549077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:19:52.260476  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:19:52.326335  549077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:19:53.107266  549077 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
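
The sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1 in this run) and adds a log directive. A minimal Go sketch of the same text edit against a trimmed, illustrative Corefile (not the cluster's full Corefile):

// corednspatch.go: insert a hosts{} stanza before the forward plugin and a log directive before errors.
package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}`
	hosts := `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf`
	// Same effect as the `sed -e '/forward .../i hosts {...}' -e '/errors/i log'` pipeline above.
	patched := strings.Replace(corefile, "        forward . /etc/resolv.conf", hosts, 1)
	patched = strings.Replace(patched, "        errors", "        log\n        errors", 1)
	fmt.Println(patched)
}
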
	I1205 19:19:53.107380  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107404  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107428  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107411  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107855  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.107863  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.107872  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.107875  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.107881  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107889  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.107898  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.107909  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.108388  549077 main.go:141] libmachine: (ha-106302) DBG | Closing plugin on server side
	I1205 19:19:53.108430  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.108447  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.108523  549077 main.go:141] libmachine: (ha-106302) DBG | Closing plugin on server side
	I1205 19:19:53.108536  549077 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 19:19:53.108552  549077 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 19:19:53.108666  549077 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1205 19:19:53.108672  549077 round_trippers.go:469] Request Headers:
	I1205 19:19:53.108683  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:19:53.108690  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:19:53.108977  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.109004  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.122784  549077 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1205 19:19:53.123463  549077 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1205 19:19:53.123481  549077 round_trippers.go:469] Request Headers:
	I1205 19:19:53.123489  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:19:53.123494  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:19:53.123497  549077 round_trippers.go:473]     Content-Type: application/json
	I1205 19:19:53.127870  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:19:53.128387  549077 main.go:141] libmachine: Making call to close driver server
	I1205 19:19:53.128421  549077 main.go:141] libmachine: (ha-106302) Calling .Close
	I1205 19:19:53.128753  549077 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:19:53.128782  549077 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:19:53.130618  549077 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1205 19:19:53.131922  549077 addons.go:510] duration metric: took 1.09949066s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 19:19:53.131966  549077 start.go:246] waiting for cluster config update ...
	I1205 19:19:53.131976  549077 start.go:255] writing updated cluster config ...
	I1205 19:19:53.133784  549077 out.go:201] 
	I1205 19:19:53.135291  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:19:53.135384  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:53.137100  549077 out.go:177] * Starting "ha-106302-m02" control-plane node in "ha-106302" cluster
	I1205 19:19:53.138489  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:19:53.138517  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:19:53.138635  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:19:53.138649  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:19:53.138720  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:19:53.138982  549077 start.go:360] acquireMachinesLock for ha-106302-m02: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:19:53.139025  549077 start.go:364] duration metric: took 23.765µs to acquireMachinesLock for "ha-106302-m02"
	I1205 19:19:53.139048  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:19:53.139118  549077 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1205 19:19:53.140509  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:19:53.140599  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:19:53.140636  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:19:53.156622  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I1205 19:19:53.157158  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:19:53.157623  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:19:53.157649  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:19:53.157947  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:19:53.158168  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:19:53.158323  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:19:53.158520  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:19:53.158562  549077 client.go:168] LocalClient.Create starting
	I1205 19:19:53.158607  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:19:53.158656  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:53.158704  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:53.158778  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:19:53.158809  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:19:53.158825  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:19:53.158852  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:19:53.158863  549077 main.go:141] libmachine: (ha-106302-m02) Calling .PreCreateCheck
	I1205 19:19:53.159044  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:19:53.159562  549077 main.go:141] libmachine: Creating machine...
	I1205 19:19:53.159580  549077 main.go:141] libmachine: (ha-106302-m02) Calling .Create
	I1205 19:19:53.159720  549077 main.go:141] libmachine: (ha-106302-m02) Creating KVM machine...
	I1205 19:19:53.161306  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found existing default KVM network
	I1205 19:19:53.161451  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found existing private KVM network mk-ha-106302
	I1205 19:19:53.161677  549077 main.go:141] libmachine: (ha-106302-m02) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 ...
	I1205 19:19:53.161706  549077 main.go:141] libmachine: (ha-106302-m02) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:19:53.161792  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.161686  549462 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:53.161946  549077 main.go:141] libmachine: (ha-106302-m02) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:19:53.454907  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.454778  549462 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa...
	I1205 19:19:53.629727  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.629571  549462 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/ha-106302-m02.rawdisk...
	I1205 19:19:53.629774  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Writing magic tar header
	I1205 19:19:53.629794  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Writing SSH key tar header
	I1205 19:19:53.629802  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:53.629693  549462 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 ...
	I1205 19:19:53.629813  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02
	I1205 19:19:53.629877  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02 (perms=drwx------)
	I1205 19:19:53.629901  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:19:53.629937  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:19:53.629971  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:19:53.629982  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:19:53.629997  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:19:53.630005  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:19:53.630016  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:19:53.630032  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:19:53.630058  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:19:53.630069  549077 main.go:141] libmachine: (ha-106302-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:19:53.630084  549077 main.go:141] libmachine: (ha-106302-m02) Creating domain...
	I1205 19:19:53.630098  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Checking permissions on dir: /home
	I1205 19:19:53.630111  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Skipping /home - not owner
	I1205 19:19:53.630931  549077 main.go:141] libmachine: (ha-106302-m02) define libvirt domain using xml: 
	I1205 19:19:53.630951  549077 main.go:141] libmachine: (ha-106302-m02) <domain type='kvm'>
	I1205 19:19:53.630961  549077 main.go:141] libmachine: (ha-106302-m02)   <name>ha-106302-m02</name>
	I1205 19:19:53.630968  549077 main.go:141] libmachine: (ha-106302-m02)   <memory unit='MiB'>2200</memory>
	I1205 19:19:53.630977  549077 main.go:141] libmachine: (ha-106302-m02)   <vcpu>2</vcpu>
	I1205 19:19:53.630984  549077 main.go:141] libmachine: (ha-106302-m02)   <features>
	I1205 19:19:53.630994  549077 main.go:141] libmachine: (ha-106302-m02)     <acpi/>
	I1205 19:19:53.630998  549077 main.go:141] libmachine: (ha-106302-m02)     <apic/>
	I1205 19:19:53.631006  549077 main.go:141] libmachine: (ha-106302-m02)     <pae/>
	I1205 19:19:53.631010  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631018  549077 main.go:141] libmachine: (ha-106302-m02)   </features>
	I1205 19:19:53.631023  549077 main.go:141] libmachine: (ha-106302-m02)   <cpu mode='host-passthrough'>
	I1205 19:19:53.631031  549077 main.go:141] libmachine: (ha-106302-m02)   
	I1205 19:19:53.631048  549077 main.go:141] libmachine: (ha-106302-m02)   </cpu>
	I1205 19:19:53.631078  549077 main.go:141] libmachine: (ha-106302-m02)   <os>
	I1205 19:19:53.631098  549077 main.go:141] libmachine: (ha-106302-m02)     <type>hvm</type>
	I1205 19:19:53.631107  549077 main.go:141] libmachine: (ha-106302-m02)     <boot dev='cdrom'/>
	I1205 19:19:53.631116  549077 main.go:141] libmachine: (ha-106302-m02)     <boot dev='hd'/>
	I1205 19:19:53.631124  549077 main.go:141] libmachine: (ha-106302-m02)     <bootmenu enable='no'/>
	I1205 19:19:53.631134  549077 main.go:141] libmachine: (ha-106302-m02)   </os>
	I1205 19:19:53.631143  549077 main.go:141] libmachine: (ha-106302-m02)   <devices>
	I1205 19:19:53.631154  549077 main.go:141] libmachine: (ha-106302-m02)     <disk type='file' device='cdrom'>
	I1205 19:19:53.631183  549077 main.go:141] libmachine: (ha-106302-m02)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/boot2docker.iso'/>
	I1205 19:19:53.631194  549077 main.go:141] libmachine: (ha-106302-m02)       <target dev='hdc' bus='scsi'/>
	I1205 19:19:53.631203  549077 main.go:141] libmachine: (ha-106302-m02)       <readonly/>
	I1205 19:19:53.631212  549077 main.go:141] libmachine: (ha-106302-m02)     </disk>
	I1205 19:19:53.631221  549077 main.go:141] libmachine: (ha-106302-m02)     <disk type='file' device='disk'>
	I1205 19:19:53.631237  549077 main.go:141] libmachine: (ha-106302-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:19:53.631252  549077 main.go:141] libmachine: (ha-106302-m02)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/ha-106302-m02.rawdisk'/>
	I1205 19:19:53.631263  549077 main.go:141] libmachine: (ha-106302-m02)       <target dev='hda' bus='virtio'/>
	I1205 19:19:53.631274  549077 main.go:141] libmachine: (ha-106302-m02)     </disk>
	I1205 19:19:53.631284  549077 main.go:141] libmachine: (ha-106302-m02)     <interface type='network'>
	I1205 19:19:53.631293  549077 main.go:141] libmachine: (ha-106302-m02)       <source network='mk-ha-106302'/>
	I1205 19:19:53.631316  549077 main.go:141] libmachine: (ha-106302-m02)       <model type='virtio'/>
	I1205 19:19:53.631331  549077 main.go:141] libmachine: (ha-106302-m02)     </interface>
	I1205 19:19:53.631344  549077 main.go:141] libmachine: (ha-106302-m02)     <interface type='network'>
	I1205 19:19:53.631354  549077 main.go:141] libmachine: (ha-106302-m02)       <source network='default'/>
	I1205 19:19:53.631367  549077 main.go:141] libmachine: (ha-106302-m02)       <model type='virtio'/>
	I1205 19:19:53.631376  549077 main.go:141] libmachine: (ha-106302-m02)     </interface>
	I1205 19:19:53.631384  549077 main.go:141] libmachine: (ha-106302-m02)     <serial type='pty'>
	I1205 19:19:53.631393  549077 main.go:141] libmachine: (ha-106302-m02)       <target port='0'/>
	I1205 19:19:53.631401  549077 main.go:141] libmachine: (ha-106302-m02)     </serial>
	I1205 19:19:53.631415  549077 main.go:141] libmachine: (ha-106302-m02)     <console type='pty'>
	I1205 19:19:53.631426  549077 main.go:141] libmachine: (ha-106302-m02)       <target type='serial' port='0'/>
	I1205 19:19:53.631434  549077 main.go:141] libmachine: (ha-106302-m02)     </console>
	I1205 19:19:53.631446  549077 main.go:141] libmachine: (ha-106302-m02)     <rng model='virtio'>
	I1205 19:19:53.631457  549077 main.go:141] libmachine: (ha-106302-m02)       <backend model='random'>/dev/random</backend>
	I1205 19:19:53.631468  549077 main.go:141] libmachine: (ha-106302-m02)     </rng>
	I1205 19:19:53.631474  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631496  549077 main.go:141] libmachine: (ha-106302-m02)     
	I1205 19:19:53.631509  549077 main.go:141] libmachine: (ha-106302-m02)   </devices>
	I1205 19:19:53.631522  549077 main.go:141] libmachine: (ha-106302-m02) </domain>
	I1205 19:19:53.631527  549077 main.go:141] libmachine: (ha-106302-m02) 
	I1205 19:19:53.638274  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:3d:5d:13 in network default
	I1205 19:19:53.638929  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring networks are active...
	I1205 19:19:53.638948  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:53.639739  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring network default is active
	I1205 19:19:53.639999  549077 main.go:141] libmachine: (ha-106302-m02) Ensuring network mk-ha-106302 is active
	I1205 19:19:53.640360  549077 main.go:141] libmachine: (ha-106302-m02) Getting domain xml...
	I1205 19:19:53.640970  549077 main.go:141] libmachine: (ha-106302-m02) Creating domain...
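
The domain XML logged above is what the kvm2 driver hands to libvirt for the m02 machine. A minimal sketch of rendering a trimmed version of that definition from a Go template (not the driver's actual code; the disk path is a placeholder):

// domainxml.go: render a trimmed libvirt domain definition from a template.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domain struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values mirror the ha-106302-m02 machine above; the disk path is illustrative.
	d := domain{Name: "ha-106302-m02", MemoryMiB: 2200, CPUs: 2,
		DiskPath: "/path/to/ha-106302-m02.rawdisk", Network: "mk-ha-106302"}
	if err := t.Execute(os.Stdout, d); err != nil {
		panic(err)
	}
}
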
	I1205 19:19:54.858939  549077 main.go:141] libmachine: (ha-106302-m02) Waiting to get IP...
	I1205 19:19:54.859905  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:54.860367  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:54.860447  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:54.860358  549462 retry.go:31] will retry after 210.406566ms: waiting for machine to come up
	I1205 19:19:55.072865  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.073270  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.073303  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.073236  549462 retry.go:31] will retry after 380.564554ms: waiting for machine to come up
	I1205 19:19:55.456055  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.456633  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.456664  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.456575  549462 retry.go:31] will retry after 318.906554ms: waiting for machine to come up
	I1205 19:19:55.777216  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:55.777679  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:55.777710  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:55.777619  549462 retry.go:31] will retry after 557.622429ms: waiting for machine to come up
	I1205 19:19:56.337019  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:56.337517  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:56.337547  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:56.337452  549462 retry.go:31] will retry after 733.803738ms: waiting for machine to come up
	I1205 19:19:57.072993  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:57.073519  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:57.073554  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:57.073464  549462 retry.go:31] will retry after 792.053725ms: waiting for machine to come up
	I1205 19:19:57.866686  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:57.867255  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:57.867284  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:57.867204  549462 retry.go:31] will retry after 899.083916ms: waiting for machine to come up
	I1205 19:19:58.767474  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:58.767846  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:58.767879  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:58.767799  549462 retry.go:31] will retry after 894.520794ms: waiting for machine to come up
	I1205 19:19:59.663948  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:19:59.664483  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:19:59.664517  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:19:59.664431  549462 retry.go:31] will retry after 1.445971502s: waiting for machine to come up
	I1205 19:20:01.112081  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:01.112472  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:01.112497  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:01.112419  549462 retry.go:31] will retry after 2.114052847s: waiting for machine to come up
	I1205 19:20:03.228602  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:03.229091  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:03.229116  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:03.229037  549462 retry.go:31] will retry after 2.786335133s: waiting for machine to come up
	I1205 19:20:06.019023  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:06.019472  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:06.019494  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:06.019436  549462 retry.go:31] will retry after 3.312152878s: waiting for machine to come up
	I1205 19:20:09.332971  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:09.333454  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:09.333485  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:09.333375  549462 retry.go:31] will retry after 4.193621264s: waiting for machine to come up
	I1205 19:20:13.528190  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:13.528561  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find current IP address of domain ha-106302-m02 in network mk-ha-106302
	I1205 19:20:13.528582  549077 main.go:141] libmachine: (ha-106302-m02) DBG | I1205 19:20:13.528513  549462 retry.go:31] will retry after 5.505002432s: waiting for machine to come up
	I1205 19:20:19.035383  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.035839  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has current primary IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.035869  549077 main.go:141] libmachine: (ha-106302-m02) Found IP for machine: 192.168.39.22
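
The growing retry delays above (210ms, 380ms, ... 5.5s) come from a jittered, roughly exponential backoff while waiting for the VM's DHCP lease to appear. A minimal Go sketch of that wait pattern, with a stand-in probe instead of the real lease lookup:

// ipwait.go: retry a probe with jittered, roughly doubling delays until it succeeds or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(probe func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		// Jittered, roughly doubling backoff, matching the growing waits in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 8*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	start := time.Now()
	ip, err := waitFor(func() (string, error) {
		// Stand-in probe: pretend the DHCP lease shows up after ~20s.
		if time.Since(start) > 20*time.Second {
			return "192.168.39.22", nil
		}
		return "", errors.New("no lease yet")
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
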
	I1205 19:20:19.035884  549077 main.go:141] libmachine: (ha-106302-m02) Reserving static IP address...
	I1205 19:20:19.036316  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find host DHCP lease matching {name: "ha-106302-m02", mac: "52:54:00:50:91:17", ip: "192.168.39.22"} in network mk-ha-106302
	I1205 19:20:19.111128  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Getting to WaitForSSH function...
	I1205 19:20:19.111162  549077 main.go:141] libmachine: (ha-106302-m02) Reserved static IP address: 192.168.39.22
	I1205 19:20:19.111175  549077 main.go:141] libmachine: (ha-106302-m02) Waiting for SSH to be available...
	I1205 19:20:19.113732  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:19.114085  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302
	I1205 19:20:19.114114  549077 main.go:141] libmachine: (ha-106302-m02) DBG | unable to find defined IP address of network mk-ha-106302 interface with MAC address 52:54:00:50:91:17
	I1205 19:20:19.114257  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH client type: external
	I1205 19:20:19.114278  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa (-rw-------)
	I1205 19:20:19.114319  549077 main.go:141] libmachine: (ha-106302-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:20:19.114332  549077 main.go:141] libmachine: (ha-106302-m02) DBG | About to run SSH command:
	I1205 19:20:19.114349  549077 main.go:141] libmachine: (ha-106302-m02) DBG | exit 0
	I1205 19:20:19.118035  549077 main.go:141] libmachine: (ha-106302-m02) DBG | SSH cmd err, output: exit status 255: 
	I1205 19:20:19.118057  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 19:20:19.118065  549077 main.go:141] libmachine: (ha-106302-m02) DBG | command : exit 0
	I1205 19:20:19.118070  549077 main.go:141] libmachine: (ha-106302-m02) DBG | err     : exit status 255
	I1205 19:20:19.118077  549077 main.go:141] libmachine: (ha-106302-m02) DBG | output  : 
	I1205 19:20:22.120219  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Getting to WaitForSSH function...
	I1205 19:20:22.122541  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.122838  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.122871  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.122905  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH client type: external
	I1205 19:20:22.122934  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa (-rw-------)
	I1205 19:20:22.122975  549077 main.go:141] libmachine: (ha-106302-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:20:22.122988  549077 main.go:141] libmachine: (ha-106302-m02) DBG | About to run SSH command:
	I1205 19:20:22.122997  549077 main.go:141] libmachine: (ha-106302-m02) DBG | exit 0
	I1205 19:20:22.248910  549077 main.go:141] libmachine: (ha-106302-m02) DBG | SSH cmd err, output: <nil>: 
	I1205 19:20:22.249203  549077 main.go:141] libmachine: (ha-106302-m02) KVM machine creation complete!
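The WaitForSSH step above probes the new VM by running `exit 0` through an external ssh client and retrying when it fails (the first attempt returns exit status 255 before sshd is up). A minimal sketch of that kind of probe, assuming the system ssh binary and a placeholder host and key path:

```go
// Sketch of an SSH reachability probe: retry `ssh ... exit 0` until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs `ssh ... exit 0` against host until it returns exit status 0.
func waitForSSH(host, keyPath string, attempts int, delay time.Duration) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH is reachable
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("ssh to %s not ready after %d attempts", host, attempts)
}

func main() {
	// Host and key path are placeholders for illustration only.
	if err := waitForSSH("192.168.39.22", "/path/to/id_rsa", 10, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```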
	I1205 19:20:22.249549  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:20:22.250245  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:22.250531  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:22.250724  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:20:22.250739  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetState
	I1205 19:20:22.252145  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:20:22.252159  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:20:22.252171  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:20:22.252176  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.255218  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.255608  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.255639  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.255817  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.256017  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.256246  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.256424  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.256663  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.256916  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.256931  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:20:22.368260  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:20:22.368313  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:20:22.368324  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.371040  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.371460  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.371481  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.371672  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.371891  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.372059  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.372173  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.372389  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.372564  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.372578  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:20:22.485513  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:20:22.485607  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:20:22.485621  549077 main.go:141] libmachine: Provisioning with buildroot...
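Provisioner detection above boils down to running `cat /etc/os-release` and matching the distribution ID. A small standalone sketch of parsing that key=value format (an illustration, not minikube's provision code):

```go
// Sketch of parsing /etc/os-release output to identify the host distribution.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns os-release text into a key/value map, stripping quotes.
func parseOSRelease(contents string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n"
	info := parseOSRelease(out)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["NAME"])
	}
}
```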
	I1205 19:20:22.485637  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.485917  549077 buildroot.go:166] provisioning hostname "ha-106302-m02"
	I1205 19:20:22.485951  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.486197  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.489137  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.489476  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.489498  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.489650  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.489844  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.489970  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.490109  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.490248  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.490464  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.490479  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302-m02 && echo "ha-106302-m02" | sudo tee /etc/hostname
	I1205 19:20:22.616293  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302-m02
	
	I1205 19:20:22.616334  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.618960  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.619345  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.619376  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.619593  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.619776  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.619933  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.620106  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.620296  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.620475  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.620492  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:20:22.738362  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:20:22.738404  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:20:22.738463  549077 buildroot.go:174] setting up certificates
	I1205 19:20:22.738483  549077 provision.go:84] configureAuth start
	I1205 19:20:22.738504  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetMachineName
	I1205 19:20:22.738844  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:22.741581  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.741992  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.742022  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.742170  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.744256  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.744573  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.744600  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.744740  549077 provision.go:143] copyHostCerts
	I1205 19:20:22.744774  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:20:22.744818  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:20:22.744828  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:20:22.744891  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:20:22.744975  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:20:22.744994  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:20:22.745000  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:20:22.745024  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:20:22.745615  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:20:22.745684  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:20:22.745691  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:20:22.745739  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:20:22.745877  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302-m02 san=[127.0.0.1 192.168.39.22 ha-106302-m02 localhost minikube]
	I1205 19:20:22.796359  549077 provision.go:177] copyRemoteCerts
	I1205 19:20:22.796421  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:20:22.796448  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.799357  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.799732  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.799766  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.799995  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.800198  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.800385  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.800538  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:22.887828  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:20:22.887929  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:20:22.916212  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:20:22.916319  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:20:22.941232  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:20:22.941341  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:20:22.967161  549077 provision.go:87] duration metric: took 228.658819ms to configureAuth
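The configureAuth phase above issues a server certificate whose SANs are the node name, localhost, and the machine IPs listed in the `san=[...]` line, then copies ca.pem, server.pem, and server-key.pem to /etc/docker on the node. A self-contained sketch of issuing such a certificate with crypto/x509; it creates a throwaway CA in memory rather than loading ca.pem/ca-key.pem, and output file names are illustrative:

```go
// Sketch of issuing a server certificate with DNS and IP SANs, signed by a CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA (illustration only); a real run loads an existing CA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with SANs matching the log's san=[...] list.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-106302-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-106302-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.22")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	// Write the server certificate; the key would be written alongside it.
	out, err := os.Create("server.pem")
	check(err)
	defer out.Close()
	check(pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
```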
	I1205 19:20:22.967199  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:20:22.967392  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:22.967485  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:22.970286  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.970715  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:22.970749  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:22.970939  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:22.971156  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.971320  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:22.971433  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:22.971580  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:22.971846  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:22.971863  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:20:23.207888  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:20:23.207924  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:20:23.207935  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetURL
	I1205 19:20:23.209276  549077 main.go:141] libmachine: (ha-106302-m02) DBG | Using libvirt version 6000000
	I1205 19:20:23.211506  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.211907  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.211936  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.212208  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:20:23.212224  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:20:23.212232  549077 client.go:171] duration metric: took 30.053657655s to LocalClient.Create
	I1205 19:20:23.212256  549077 start.go:167] duration metric: took 30.053742841s to libmachine.API.Create "ha-106302"
	I1205 19:20:23.212293  549077 start.go:293] postStartSetup for "ha-106302-m02" (driver="kvm2")
	I1205 19:20:23.212310  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:20:23.212333  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.212577  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:20:23.212606  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.215114  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.215516  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.215546  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.215705  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.215924  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.216106  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.216253  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.304000  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:20:23.308581  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:20:23.308614  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:20:23.308698  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:20:23.308795  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:20:23.308810  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:20:23.308927  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:20:23.319412  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:20:23.344460  549077 start.go:296] duration metric: took 132.146002ms for postStartSetup
	I1205 19:20:23.344545  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetConfigRaw
	I1205 19:20:23.345277  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:23.348207  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.348665  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.348693  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.348984  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:20:23.349202  549077 start.go:128] duration metric: took 30.210071126s to createHost
	I1205 19:20:23.349267  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.351860  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.352216  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.352247  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.352437  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.352631  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.352819  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.352959  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.353129  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:20:23.353382  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1205 19:20:23.353399  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:20:23.465312  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426423.446273328
	
	I1205 19:20:23.465337  549077 fix.go:216] guest clock: 1733426423.446273328
	I1205 19:20:23.465346  549077 fix.go:229] Guest: 2024-12-05 19:20:23.446273328 +0000 UTC Remote: 2024-12-05 19:20:23.349227376 +0000 UTC m=+77.722963766 (delta=97.045952ms)
	I1205 19:20:23.465364  549077 fix.go:200] guest clock delta is within tolerance: 97.045952ms
	I1205 19:20:23.465370  549077 start.go:83] releasing machines lock for "ha-106302-m02", held for 30.326335436s
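The guest-clock check above reads the VM's time with `date +%s.%N` over SSH and compares the delta against a skew tolerance before continuing. A rough sketch of that comparison, with the ssh target and the tolerance value as placeholders:

```go
// Sketch of a guest-vs-host clock skew check via `date +%s.%N` over SSH.
package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(host string) (time.Duration, error) {
	out, err := exec.Command("ssh", host, "date", "+%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	const tolerance = 2 * time.Second // placeholder threshold
	delta, err := guestClockDelta("docker@192.168.39.22")
	if err != nil {
		fmt.Println("clock check failed:", err)
		return
	}
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```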
	I1205 19:20:23.465398  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.465708  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:23.468308  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.468731  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.468764  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.471281  549077 out.go:177] * Found network options:
	I1205 19:20:23.472818  549077 out.go:177]   - NO_PROXY=192.168.39.185
	W1205 19:20:23.473976  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:20:23.474014  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474583  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474762  549077 main.go:141] libmachine: (ha-106302-m02) Calling .DriverName
	I1205 19:20:23.474896  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:20:23.474942  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	W1205 19:20:23.474975  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:20:23.475049  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:20:23.475075  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHHostname
	I1205 19:20:23.477606  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.477936  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.477969  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.477989  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.478113  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.478273  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.478379  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:23.478405  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:23.478432  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.478613  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHPort
	I1205 19:20:23.478614  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.478752  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHKeyPath
	I1205 19:20:23.478903  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetSSHUsername
	I1205 19:20:23.479088  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m02/id_rsa Username:docker}
	I1205 19:20:23.717492  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:20:23.724398  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:20:23.724467  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:20:23.742377  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:20:23.742416  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:20:23.742481  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:20:23.759474  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:20:23.774720  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:20:23.774808  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:20:23.790887  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:20:23.807005  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:20:23.919834  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:20:24.073552  549077 docker.go:233] disabling docker service ...
	I1205 19:20:24.073644  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:20:24.088648  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:20:24.103156  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:20:24.227966  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:20:24.343808  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:20:24.359016  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:20:24.378372  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:20:24.378434  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.390093  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:20:24.390163  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.402052  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.413868  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.425063  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:20:24.436756  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.448351  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.466246  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:20:24.477646  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:20:24.487958  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:20:24.488022  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:20:24.504864  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:20:24.516929  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:24.650055  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
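The CRI-O reconfiguration above is a fixed sequence of sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) followed by a daemon-reload and service restart. The command strings below are copied from the log; the runSSH helper is a stand-in for minikube's ssh_runner, not its real API:

```go
// Sketch of applying the logged cri-o config edits as a command sequence over SSH.
package main

import (
	"fmt"
	"os/exec"
)

// runSSH executes one shell command on the remote node via the system ssh client.
func runSSH(host, command string) error {
	return exec.Command("ssh", host, command).Run()
}

func main() {
	host := "docker@192.168.39.22" // placeholder target
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := runSSH(host, c); err != nil {
			fmt.Printf("command %q failed: %v\n", c, err)
			return
		}
	}
	fmt.Println("cri-o reconfigured and restarted")
}
```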
	I1205 19:20:24.749984  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:20:24.750068  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:20:24.754929  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:20:24.754993  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:20:24.758880  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:20:24.803432  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:20:24.803519  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:20:24.832773  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:20:24.866071  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:20:24.867336  549077 out.go:177]   - env NO_PROXY=192.168.39.185
	I1205 19:20:24.868566  549077 main.go:141] libmachine: (ha-106302-m02) Calling .GetIP
	I1205 19:20:24.871432  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:24.871918  549077 main.go:141] libmachine: (ha-106302-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:91:17", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:20:09 +0000 UTC Type:0 Mac:52:54:00:50:91:17 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-106302-m02 Clientid:01:52:54:00:50:91:17}
	I1205 19:20:24.871951  549077 main.go:141] libmachine: (ha-106302-m02) DBG | domain ha-106302-m02 has defined IP address 192.168.39.22 and MAC address 52:54:00:50:91:17 in network mk-ha-106302
	I1205 19:20:24.872171  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:20:24.876554  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
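The bash one-liner above refreshes the host.minikube.internal entry: it drops any stale line for that name and appends the current gateway IP. A sketch of the same idea in Go, operating on a local copy of the file purely for illustration (the real step rewrites /etc/hosts on the node via sudo cp):

```go
// Sketch of dropping a stale hosts entry and appending the current mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func updateHosts(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.Contains(line, name) {
			continue // drop the old mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// "hosts.copy" is a placeholder path for a local copy of /etc/hosts.
	if err := updateHosts("hosts.copy", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```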
	I1205 19:20:24.890047  549077 mustload.go:65] Loading cluster: ha-106302
	I1205 19:20:24.890241  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:24.890558  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:24.890603  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:24.905579  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I1205 19:20:24.906049  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:24.906603  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:24.906625  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:24.906945  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:24.907214  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:20:24.908815  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:20:24.909241  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:24.909290  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:24.924888  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I1205 19:20:24.925342  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:24.925844  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:24.925864  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:24.926328  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:24.926542  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:20:24.926741  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.22
	I1205 19:20:24.926754  549077 certs.go:194] generating shared ca certs ...
	I1205 19:20:24.926770  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:24.926902  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:20:24.926939  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:20:24.926948  549077 certs.go:256] generating profile certs ...
	I1205 19:20:24.927023  549077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:20:24.927047  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c
	I1205 19:20:24.927061  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.254]
	I1205 19:20:25.018998  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c ...
	I1205 19:20:25.019030  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c: {Name:mkb73e87a5bbbf4f4c79d1fb041b857c135f5f2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:25.019217  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c ...
	I1205 19:20:25.019230  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c: {Name:mk2fba0e13caab29e22d03865232eceeba478b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:20:25.019304  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.842d328c -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:20:25.019444  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.842d328c -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:20:25.019581  549077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:20:25.019598  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:20:25.019611  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:20:25.019630  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:20:25.019645  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:20:25.019658  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:20:25.019670  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:20:25.019681  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:20:25.019693  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:20:25.019742  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:20:25.019769  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:20:25.019780  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:20:25.019800  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:20:25.019822  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:20:25.019843  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:20:25.019881  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:20:25.019905  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.019919  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.019931  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.019965  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:20:25.022938  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:25.023319  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:20:25.023341  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:25.023553  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:20:25.023832  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:20:25.024047  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:20:25.024204  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:20:25.100678  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 19:20:25.110731  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 19:20:25.125160  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 19:20:25.130012  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 19:20:25.140972  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 19:20:25.146148  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 19:20:25.157617  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 19:20:25.162172  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1205 19:20:25.173149  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 19:20:25.178465  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 19:20:25.189406  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 19:20:25.193722  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1205 19:20:25.206028  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:20:25.233287  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:20:25.261305  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:20:25.289482  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:20:25.316415  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 19:20:25.342226  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:20:25.368246  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:20:25.393426  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:20:25.419609  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:20:25.445786  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:20:25.469979  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:20:25.493824  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 19:20:25.510843  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 19:20:25.527645  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 19:20:25.545705  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1205 19:20:25.563452  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 19:20:25.580089  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1205 19:20:25.596848  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 19:20:25.613807  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:20:25.619697  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:20:25.630983  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.635623  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.635686  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:20:25.641677  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:20:25.653239  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:20:25.664932  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.669827  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.669897  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:20:25.675619  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:20:25.687127  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:20:25.698338  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.702836  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.702900  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:20:25.708667  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
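The openssl steps above compute each CA certificate's subject hash and link `<hash>.0` in /etc/ssl/certs back to it, which is how the system trust store finds the certs. A sketch of that linking, shelling out to openssl; paths are illustrative and a real run performs this over SSH with sudo:

```go
// Sketch of creating the /etc/ssl/certs/<hash>.0 symlink for a CA certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, then point <hash>.0 at the certificate.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```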
	I1205 19:20:25.720085  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:20:25.724316  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:20:25.724377  549077 kubeadm.go:934] updating node {m02 192.168.39.22 8443 v1.31.2 crio true true} ...
	I1205 19:20:25.724468  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:20:25.724495  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:20:25.724527  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:20:25.742381  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:20:25.742481  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
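The manifest above is generated in memory and later written to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp further down), so the kubelet runs kube-vip as a static pod that advertises the 192.168.39.254 control-plane VIP on port 8443. A trimmed-down sketch of rendering such a manifest from a template; this is not minikube's actual template, only the parameterization idea:

package main

import (
	"os"
	"text/template"
)

// A reduced stand-in for the kube-vip static-pod manifest printed above;
// only the VIP address, API port and image are templated here.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the log: VIP 192.168.39.254, API port 8443.
	_ = t.Execute(os.Stdout, map[string]string{
		"VIP":   "192.168.39.254",
		"Port":  "8443",
		"Image": "ghcr.io/kube-vip/kube-vip:v0.8.6",
	})
}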
	I1205 19:20:25.742576  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:20:25.753160  549077 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 19:20:25.753241  549077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 19:20:25.763396  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 19:20:25.763426  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:20:25.763482  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:20:25.763508  549077 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1205 19:20:25.763539  549077 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1205 19:20:25.767948  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 19:20:25.767974  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 19:20:27.082938  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:20:27.083030  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:20:27.089029  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 19:20:27.089083  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 19:20:27.157306  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:20:27.187033  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:20:27.187142  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:20:27.195317  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 19:20:27.195366  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
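Each binary transfer above is guarded by a remote stat: the file is only copied from the local cache when the existence check exits non-zero. A rough sketch of that check-then-copy flow using plain ssh/scp (minikube's own ssh_runner keeps a session open instead; the host and paths below come from the log, but the helper itself is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBinary copies a cached binary to the node only if a remote stat fails,
// mirroring the existence checks in the log.
func ensureBinary(host, local, remote string) error {
	// stat exits with status 1 when the file is missing.
	if err := exec.Command("ssh", host, "stat", "-c", "%s %y", remote).Run(); err == nil {
		return nil // already present, skip the transfer
	}
	fmt.Printf("copying %s -> %s:%s\n", local, host, remote)
	return exec.Command("scp", local, host+":"+remote).Run()
}

func main() {
	_ = ensureBinary("docker@192.168.39.22",
		"/home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet",
		"/var/lib/minikube/binaries/v1.31.2/kubelet")
}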
	I1205 19:20:27.686796  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 19:20:27.697152  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1205 19:20:27.715018  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:20:27.734908  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:20:27.752785  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:20:27.756906  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:20:27.769582  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:27.907328  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:20:27.931860  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:20:27.932222  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:20:27.932282  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:20:27.948463  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I1205 19:20:27.949044  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:20:27.949565  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:20:27.949592  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:20:27.949925  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:20:27.950146  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:20:27.950314  549077 start.go:317] joinCluster: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:20:27.950422  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 19:20:27.950440  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:20:27.953425  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:27.953881  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:20:27.953912  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:20:27.954070  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:20:27.954316  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:20:27.954453  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:20:27.954606  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:20:28.113909  549077 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:20:28.113956  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kqxul8.esbt6vl0oo3pylcw --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443"
	I1205 19:20:49.921346  549077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kqxul8.esbt6vl0oo3pylcw --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m02 --control-plane --apiserver-advertise-address=192.168.39.22 --apiserver-bind-port=8443": (21.80735449s)
	I1205 19:20:49.921399  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 19:20:50.372592  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302-m02 minikube.k8s.io/updated_at=2024_12_05T19_20_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=false
	I1205 19:20:50.546557  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-106302-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 19:20:50.670851  549077 start.go:319] duration metric: took 22.720530002s to joinCluster
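The join above is driven by a short-lived bootstrap token minted on the existing control plane (kubeadm token create --print-join-command) and replayed on m02 with the extra --control-plane flags. A small sketch that assembles the same command line from its parts; the token and CA hash are the ones printed in the log and are long since expired:

package main

import (
	"fmt"
	"strings"
)

// joinCommand builds the kubeadm join invocation used for the new
// control-plane node ha-106302-m02.
func joinCommand(endpoint, token, caHash, nodeName, advertiseIP string) string {
	return strings.Join([]string{
		"kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}, " ")
}

func main() {
	fmt.Println(joinCommand(
		"control-plane.minikube.internal:8443",
		"kqxul8.esbt6vl0oo3pylcw",
		"sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8",
		"ha-106302-m02",
		"192.168.39.22",
	))
}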
	I1205 19:20:50.670996  549077 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:20:50.671311  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:20:50.672473  549077 out.go:177] * Verifying Kubernetes components...
	I1205 19:20:50.673807  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:20:50.984620  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:20:51.019677  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:20:51.020052  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 19:20:51.020153  549077 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.185:8443
	I1205 19:20:51.020526  549077 node_ready.go:35] waiting up to 6m0s for node "ha-106302-m02" to be "Ready" ...
	I1205 19:20:51.020686  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:51.020701  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:51.020713  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:51.020723  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:51.041602  549077 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1205 19:20:51.521579  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:51.521608  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:51.521618  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:51.521624  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:51.528072  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:20:52.021672  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:52.021725  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:52.021737  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:52.021745  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:52.033142  549077 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1205 19:20:52.521343  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:52.521374  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:52.521385  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:52.521392  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:52.538251  549077 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1205 19:20:53.021297  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:53.021332  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:53.021341  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:53.021348  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:53.024986  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:53.025544  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:53.521241  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:53.521267  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:53.521276  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:53.521280  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:53.524346  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:54.021533  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:54.021555  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:54.021563  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:54.021566  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:54.024867  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:54.521530  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:54.521559  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:54.521573  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:54.521579  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:54.525086  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.020940  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:55.020967  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:55.020978  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:55.020982  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:55.024965  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.521541  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:55.521567  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:55.521578  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:55.521583  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:55.524843  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:55.525513  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:56.021561  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:56.021592  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:56.021605  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:56.021613  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:56.032511  549077 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1205 19:20:56.521545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:56.521569  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:56.521578  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:56.521582  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:56.525173  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:57.021393  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:57.021418  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:57.021428  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:57.021452  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:57.024653  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:57.521602  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:57.521630  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:57.521642  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:57.521648  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:57.524714  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:58.021076  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:58.021102  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:58.021111  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:58.021115  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:58.024741  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:58.025390  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:20:58.521263  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:58.521301  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:58.521311  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:58.521316  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:58.524604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:59.021545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:59.021570  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:59.021579  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:59.021585  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:59.025044  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:20:59.521104  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:20:59.521130  549077 round_trippers.go:469] Request Headers:
	I1205 19:20:59.521139  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:20:59.521142  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:20:59.524601  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:00.021726  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:00.021752  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:00.021761  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:00.021765  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:00.025155  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:00.025976  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:00.521405  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:00.521429  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:00.521438  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:00.521443  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:00.524889  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:01.021527  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:01.021552  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:01.021564  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:01.021570  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:01.025273  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:01.521362  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:01.521386  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:01.521395  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:01.521400  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:01.525347  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.021591  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:02.021615  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:02.021624  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:02.021629  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:02.025220  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.521521  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:02.521548  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:02.521557  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:02.521562  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:02.524828  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:02.525818  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:03.021696  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:03.021722  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:03.021731  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:03.021735  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:03.025467  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:03.521081  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:03.521106  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:03.521115  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:03.521118  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:03.525582  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:04.021546  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:04.021570  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:04.021579  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:04.021583  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:04.025004  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:04.520903  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:04.520929  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:04.520937  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:04.520942  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:04.524427  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:05.021518  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:05.021545  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:05.021554  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:05.021557  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:05.025066  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:05.025792  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:05.520844  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:05.520870  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:05.520880  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:05.520885  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:05.524450  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:06.021705  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:06.021737  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:06.021750  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:06.021757  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:06.028871  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:21:06.520789  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:06.520815  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:06.520824  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:06.520829  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:06.524081  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:07.021065  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:07.021090  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:07.021099  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:07.021104  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:07.025141  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:07.521099  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:07.521129  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:07.521139  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:07.521142  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:07.524645  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:07.525369  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:08.021173  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:08.021197  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:08.021205  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:08.021211  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:08.024992  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:08.520960  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:08.520986  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:08.520994  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:08.521000  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:08.526502  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:21:09.021508  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:09.021532  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:09.021541  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:09.021545  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:09.024675  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:09.521594  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:09.521619  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:09.521628  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:09.521631  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:09.525284  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:09.525956  549077 node_ready.go:53] node "ha-106302-m02" has status "Ready":"False"
	I1205 19:21:10.021222  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.021257  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.021266  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.021271  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.024522  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.025029  549077 node_ready.go:49] node "ha-106302-m02" has status "Ready":"True"
	I1205 19:21:10.025048  549077 node_ready.go:38] duration metric: took 19.004494335s for node "ha-106302-m02" to be "Ready" ...
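The Ready wait above is a simple poll: roughly every 500ms the API server is asked for /api/v1/nodes/ha-106302-m02 and the node's Ready condition is inspected, for up to 6m0s. A self-contained sketch of that loop with plain net/http (certificate verification is skipped here for brevity; the real client authenticates with the profile's client certificate and CA, as shown in the kapi client config above):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus models just enough of a Node object to read its conditions.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls the node until Ready=True or the timeout passes.
func waitNodeReady(apiServer, node string, timeout time.Duration) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(apiServer + "/api/v1/nodes/" + node)
		if err == nil {
			var ns nodeStatus
			_ = json.NewDecoder(resp.Body).Decode(&ns)
			resp.Body.Close()
			for _, c := range ns.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %s", node, timeout)
}

func main() {
	if err := waitNodeReady("https://192.168.39.185:8443", "ha-106302-m02", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}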
	I1205 19:21:10.025058  549077 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:21:10.025143  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:10.025161  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.025168  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.025172  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.029254  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:10.037343  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.037449  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-45m77
	I1205 19:21:10.037458  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.037466  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.037471  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.041083  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.041839  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.041858  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.041871  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.041877  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.045415  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.045998  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.046023  549077 pod_ready.go:82] duration metric: took 8.64868ms for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.046036  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.046126  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sjsv2
	I1205 19:21:10.046137  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.046148  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.046157  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.048885  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.049682  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.049701  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.049711  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.049719  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.052106  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.052838  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.052859  549077 pod_ready.go:82] duration metric: took 6.814644ms for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.052870  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.052943  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302
	I1205 19:21:10.052958  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.052969  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.052977  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.055429  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.056066  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.056082  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.056091  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.056098  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.058521  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.059123  549077 pod_ready.go:93] pod "etcd-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.059143  549077 pod_ready.go:82] duration metric: took 6.26496ms for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.059152  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.059214  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m02
	I1205 19:21:10.059222  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.059229  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.059234  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.061697  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.062341  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.062358  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.062365  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.062369  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.064629  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:21:10.065300  549077 pod_ready.go:93] pod "etcd-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.065321  549077 pod_ready.go:82] duration metric: took 6.163254ms for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.065335  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.221800  549077 request.go:632] Waited for 156.353212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:21:10.221879  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:21:10.221887  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.221896  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.221902  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.225800  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.421906  549077 request.go:632] Waited for 195.38917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.421986  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:10.421994  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.422009  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.422020  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.425349  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.426055  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.426080  549077 pod_ready.go:82] duration metric: took 360.734464ms for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.426094  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.622166  549077 request.go:632] Waited for 195.985328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:21:10.622258  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:21:10.622264  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.622274  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.622278  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.626000  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.822214  549077 request.go:632] Waited for 195.406875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.822287  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:10.822292  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:10.822300  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:10.822313  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:10.825573  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:10.826254  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:10.826276  549077 pod_ready.go:82] duration metric: took 400.173601ms for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:10.826290  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.021260  549077 request.go:632] Waited for 194.873219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:21:11.021346  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:21:11.021355  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.021363  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.021370  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.024811  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:11.221934  549077 request.go:632] Waited for 196.368194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:11.222013  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:11.222048  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.222064  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.222069  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.226121  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:11.226777  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:11.226804  549077 pod_ready.go:82] duration metric: took 400.496709ms for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.226817  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.421793  549077 request.go:632] Waited for 194.889039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:21:11.421939  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:21:11.421953  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.421962  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.421966  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.425791  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:11.621786  549077 request.go:632] Waited for 195.325808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:11.621884  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:11.621897  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.621912  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.621921  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.626156  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:11.626616  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:11.626639  549077 pod_ready.go:82] duration metric: took 399.812324ms for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.626651  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:11.821729  549077 request.go:632] Waited for 194.997004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:21:11.821817  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:21:11.821822  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:11.821831  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:11.821838  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:11.825718  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.021841  549077 request.go:632] Waited for 195.410535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:12.021958  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:12.021969  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.021977  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.021984  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.025441  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.025999  549077 pod_ready.go:93] pod "kube-proxy-n57lf" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.026021  549077 pod_ready.go:82] duration metric: took 399.361827ms for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.026047  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.222118  549077 request.go:632] Waited for 195.969624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:21:12.222187  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:21:12.222192  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.222200  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.222204  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.225785  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.422070  549077 request.go:632] Waited for 195.377811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.422132  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.422137  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.422145  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.422149  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.426002  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.426709  549077 pod_ready.go:93] pod "kube-proxy-zw6nj" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.426735  549077 pod_ready.go:82] duration metric: took 400.678816ms for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.426748  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.621608  549077 request.go:632] Waited for 194.758143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:21:12.621678  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:21:12.621683  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.621691  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.621699  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.625056  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.822084  549077 request.go:632] Waited for 196.278548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.822154  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:21:12.822166  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:12.822175  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:12.822178  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:12.826187  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:12.827028  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:12.827048  549077 pod_ready.go:82] duration metric: took 400.290627ms for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:12.827061  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:13.021645  549077 request.go:632] Waited for 194.500049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:21:13.021737  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:21:13.021746  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.021787  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.021795  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.025431  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:13.221555  549077 request.go:632] Waited for 195.53176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:13.221632  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:21:13.221641  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.221652  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.221657  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.226002  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:13.226628  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:21:13.226651  549077 pod_ready.go:82] duration metric: took 399.582286ms for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:21:13.226663  549077 pod_ready.go:39] duration metric: took 3.201594435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
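
	Note: the pod_ready checks above poll each system pod's status.conditions until the Ready condition reports True, re-reading the pod (and its node) between throttled GETs. A minimal client-go sketch of that idea follows; it is an illustration, not minikube's pod_ready.go, and the kubeconfig path and pod name are placeholders taken from this run.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitPodReady polls until the named pod reports the Ready condition as True.
	    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
	            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	            if err != nil {
	                return false, nil // treat API errors as transient and keep polling
	            }
	            for _, c := range pod.Status.Conditions {
	                if c.Type == corev1.PodReady {
	                    return c.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil
	        })
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)
	        // pod name copied from the log purely as an example
	        if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-zw6nj", 6*time.Minute); err != nil {
	            panic(err)
	        }
	        fmt.Println("pod is Ready")
	    }
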
	I1205 19:21:13.226683  549077 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:21:13.226740  549077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:21:13.244668  549077 api_server.go:72] duration metric: took 22.573625009s to wait for apiserver process to appear ...
	I1205 19:21:13.244706  549077 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:21:13.244737  549077 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I1205 19:21:13.252149  549077 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I1205 19:21:13.252242  549077 round_trippers.go:463] GET https://192.168.39.185:8443/version
	I1205 19:21:13.252252  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.252260  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.252283  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.253152  549077 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 19:21:13.253251  549077 api_server.go:141] control plane version: v1.31.2
	I1205 19:21:13.253269  549077 api_server.go:131] duration metric: took 8.556554ms to wait for apiserver health ...
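
	Note: the healthz and version probes above are plain GETs against the API server, first /healthz (expecting the literal body "ok"), then /version to read the control-plane version. A hedged client-go sketch of those two calls, assuming the admin kubeconfig is readable:

	    package main

	    import (
	        "context"
	        "fmt"

	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)

	        // GET /healthz: a healthy apiserver answers 200 with the body "ok".
	        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("healthz: %s\n", body)

	        // GET /version: reports the control-plane version (v1.31.2 in this run).
	        ver, err := cs.Discovery().ServerVersion()
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("control plane version:", ver.GitVersion)
	    }
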
	I1205 19:21:13.253277  549077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:21:13.421707  549077 request.go:632] Waited for 168.323563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.421778  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.421784  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.421803  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.421808  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.428060  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:21:13.433027  549077 system_pods.go:59] 17 kube-system pods found
	I1205 19:21:13.433063  549077 system_pods.go:61] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:21:13.433069  549077 system_pods.go:61] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:21:13.433073  549077 system_pods.go:61] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:21:13.433076  549077 system_pods.go:61] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:21:13.433079  549077 system_pods.go:61] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:21:13.433083  549077 system_pods.go:61] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:21:13.433087  549077 system_pods.go:61] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:21:13.433090  549077 system_pods.go:61] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:21:13.433094  549077 system_pods.go:61] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:21:13.433097  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:21:13.433101  549077 system_pods.go:61] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:21:13.433104  549077 system_pods.go:61] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:21:13.433107  549077 system_pods.go:61] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:21:13.433110  549077 system_pods.go:61] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:21:13.433114  549077 system_pods.go:61] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:21:13.433119  549077 system_pods.go:61] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:21:13.433125  549077 system_pods.go:61] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:21:13.433131  549077 system_pods.go:74] duration metric: took 179.848181ms to wait for pod list to return data ...
	I1205 19:21:13.433140  549077 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:21:13.621481  549077 request.go:632] Waited for 188.228658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:21:13.621548  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:21:13.621554  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.621562  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.621566  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.625432  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:21:13.625697  549077 default_sa.go:45] found service account: "default"
	I1205 19:21:13.625716  549077 default_sa.go:55] duration metric: took 192.568863ms for default service account to be created ...
	I1205 19:21:13.625725  549077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:21:13.821886  549077 request.go:632] Waited for 196.082261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.821977  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:21:13.821988  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:13.821997  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:13.822001  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:13.828461  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:21:13.834834  549077 system_pods.go:86] 17 kube-system pods found
	I1205 19:21:13.834869  549077 system_pods.go:89] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:21:13.834877  549077 system_pods.go:89] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:21:13.834882  549077 system_pods.go:89] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:21:13.834886  549077 system_pods.go:89] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:21:13.834890  549077 system_pods.go:89] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:21:13.834894  549077 system_pods.go:89] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:21:13.834898  549077 system_pods.go:89] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:21:13.834901  549077 system_pods.go:89] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:21:13.834905  549077 system_pods.go:89] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:21:13.834909  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:21:13.834912  549077 system_pods.go:89] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:21:13.834915  549077 system_pods.go:89] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:21:13.834919  549077 system_pods.go:89] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:21:13.834924  549077 system_pods.go:89] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:21:13.834928  549077 system_pods.go:89] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:21:13.834935  549077 system_pods.go:89] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:21:13.834939  549077 system_pods.go:89] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:21:13.834946  549077 system_pods.go:126] duration metric: took 209.215629ms to wait for k8s-apps to be running ...
	I1205 19:21:13.834957  549077 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:21:13.835009  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:21:13.850235  549077 system_svc.go:56] duration metric: took 15.264777ms WaitForService to wait for kubelet
	I1205 19:21:13.850283  549077 kubeadm.go:582] duration metric: took 23.179247512s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:21:13.850305  549077 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:21:14.021757  549077 request.go:632] Waited for 171.347316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes
	I1205 19:21:14.021833  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes
	I1205 19:21:14.021840  549077 round_trippers.go:469] Request Headers:
	I1205 19:21:14.021850  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:21:14.021860  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:21:14.026541  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:21:14.027820  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:21:14.027846  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:21:14.027863  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:21:14.027868  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:21:14.027874  549077 node_conditions.go:105] duration metric: took 177.564002ms to run NodePressure ...
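
	Note: the NodePressure step reads each node's reported capacity (the ephemeral-storage and CPU figures above) from the Nodes API. A small client-go sketch of that read, for illustration only:

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)

	        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, n := range nodes.Items {
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            // e.g. "ha-106302: cpu=2 ephemeral-storage=17734596Ki"
	            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	        }
	    }
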
	I1205 19:21:14.027887  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:21:14.027919  549077 start.go:255] writing updated cluster config ...
	I1205 19:21:14.029921  549077 out.go:201] 
	I1205 19:21:14.031474  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:14.031571  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:14.033173  549077 out.go:177] * Starting "ha-106302-m03" control-plane node in "ha-106302" cluster
	I1205 19:21:14.034362  549077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:21:14.034386  549077 cache.go:56] Caching tarball of preloaded images
	I1205 19:21:14.034498  549077 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:21:14.034514  549077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:21:14.034605  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:14.034796  549077 start.go:360] acquireMachinesLock for ha-106302-m03: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:21:14.034842  549077 start.go:364] duration metric: took 26.337µs to acquireMachinesLock for "ha-106302-m03"
	I1205 19:21:14.034860  549077 start.go:93] Provisioning new machine with config: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:21:14.034960  549077 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1205 19:21:14.036589  549077 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 19:21:14.036698  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:14.036753  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:14.052449  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1205 19:21:14.052905  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:14.053431  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:14.053458  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:14.053758  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:14.053945  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:14.054107  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:14.054258  549077 start.go:159] libmachine.API.Create for "ha-106302" (driver="kvm2")
	I1205 19:21:14.054297  549077 client.go:168] LocalClient.Create starting
	I1205 19:21:14.054348  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 19:21:14.054391  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:21:14.054413  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:21:14.054484  549077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 19:21:14.054515  549077 main.go:141] libmachine: Decoding PEM data...
	I1205 19:21:14.054536  549077 main.go:141] libmachine: Parsing certificate...
	I1205 19:21:14.054563  549077 main.go:141] libmachine: Running pre-create checks...
	I1205 19:21:14.054575  549077 main.go:141] libmachine: (ha-106302-m03) Calling .PreCreateCheck
	I1205 19:21:14.054725  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:14.055103  549077 main.go:141] libmachine: Creating machine...
	I1205 19:21:14.055117  549077 main.go:141] libmachine: (ha-106302-m03) Calling .Create
	I1205 19:21:14.055267  549077 main.go:141] libmachine: (ha-106302-m03) Creating KVM machine...
	I1205 19:21:14.056572  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found existing default KVM network
	I1205 19:21:14.056653  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found existing private KVM network mk-ha-106302
	I1205 19:21:14.056780  549077 main.go:141] libmachine: (ha-106302-m03) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 ...
	I1205 19:21:14.056804  549077 main.go:141] libmachine: (ha-106302-m03) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:21:14.056850  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.056773  549869 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:21:14.056935  549077 main.go:141] libmachine: (ha-106302-m03) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 19:21:14.349600  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.349456  549869 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa...
	I1205 19:21:14.429525  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.429393  549869 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/ha-106302-m03.rawdisk...
	I1205 19:21:14.429558  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Writing magic tar header
	I1205 19:21:14.429573  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Writing SSH key tar header
	I1205 19:21:14.429586  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:14.429511  549869 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 ...
	I1205 19:21:14.429599  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03
	I1205 19:21:14.429612  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03 (perms=drwx------)
	I1205 19:21:14.429633  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:21:14.429648  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 19:21:14.429664  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 19:21:14.429734  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 19:21:14.429769  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:21:14.429779  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:21:14.429798  549077 main.go:141] libmachine: (ha-106302-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:21:14.429808  549077 main.go:141] libmachine: (ha-106302-m03) Creating domain...
	I1205 19:21:14.429823  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 19:21:14.429833  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:21:14.429861  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:21:14.429878  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Checking permissions on dir: /home
	I1205 19:21:14.429910  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Skipping /home - not owner
	I1205 19:21:14.430728  549077 main.go:141] libmachine: (ha-106302-m03) define libvirt domain using xml: 
	I1205 19:21:14.430737  549077 main.go:141] libmachine: (ha-106302-m03) <domain type='kvm'>
	I1205 19:21:14.430743  549077 main.go:141] libmachine: (ha-106302-m03)   <name>ha-106302-m03</name>
	I1205 19:21:14.430748  549077 main.go:141] libmachine: (ha-106302-m03)   <memory unit='MiB'>2200</memory>
	I1205 19:21:14.430753  549077 main.go:141] libmachine: (ha-106302-m03)   <vcpu>2</vcpu>
	I1205 19:21:14.430758  549077 main.go:141] libmachine: (ha-106302-m03)   <features>
	I1205 19:21:14.430762  549077 main.go:141] libmachine: (ha-106302-m03)     <acpi/>
	I1205 19:21:14.430769  549077 main.go:141] libmachine: (ha-106302-m03)     <apic/>
	I1205 19:21:14.430774  549077 main.go:141] libmachine: (ha-106302-m03)     <pae/>
	I1205 19:21:14.430778  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.430783  549077 main.go:141] libmachine: (ha-106302-m03)   </features>
	I1205 19:21:14.430790  549077 main.go:141] libmachine: (ha-106302-m03)   <cpu mode='host-passthrough'>
	I1205 19:21:14.430795  549077 main.go:141] libmachine: (ha-106302-m03)   
	I1205 19:21:14.430801  549077 main.go:141] libmachine: (ha-106302-m03)   </cpu>
	I1205 19:21:14.430806  549077 main.go:141] libmachine: (ha-106302-m03)   <os>
	I1205 19:21:14.430811  549077 main.go:141] libmachine: (ha-106302-m03)     <type>hvm</type>
	I1205 19:21:14.430816  549077 main.go:141] libmachine: (ha-106302-m03)     <boot dev='cdrom'/>
	I1205 19:21:14.430823  549077 main.go:141] libmachine: (ha-106302-m03)     <boot dev='hd'/>
	I1205 19:21:14.430849  549077 main.go:141] libmachine: (ha-106302-m03)     <bootmenu enable='no'/>
	I1205 19:21:14.430873  549077 main.go:141] libmachine: (ha-106302-m03)   </os>
	I1205 19:21:14.430884  549077 main.go:141] libmachine: (ha-106302-m03)   <devices>
	I1205 19:21:14.430900  549077 main.go:141] libmachine: (ha-106302-m03)     <disk type='file' device='cdrom'>
	I1205 19:21:14.430917  549077 main.go:141] libmachine: (ha-106302-m03)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/boot2docker.iso'/>
	I1205 19:21:14.430928  549077 main.go:141] libmachine: (ha-106302-m03)       <target dev='hdc' bus='scsi'/>
	I1205 19:21:14.430936  549077 main.go:141] libmachine: (ha-106302-m03)       <readonly/>
	I1205 19:21:14.430944  549077 main.go:141] libmachine: (ha-106302-m03)     </disk>
	I1205 19:21:14.430951  549077 main.go:141] libmachine: (ha-106302-m03)     <disk type='file' device='disk'>
	I1205 19:21:14.430963  549077 main.go:141] libmachine: (ha-106302-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:21:14.431003  549077 main.go:141] libmachine: (ha-106302-m03)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/ha-106302-m03.rawdisk'/>
	I1205 19:21:14.431029  549077 main.go:141] libmachine: (ha-106302-m03)       <target dev='hda' bus='virtio'/>
	I1205 19:21:14.431041  549077 main.go:141] libmachine: (ha-106302-m03)     </disk>
	I1205 19:21:14.431052  549077 main.go:141] libmachine: (ha-106302-m03)     <interface type='network'>
	I1205 19:21:14.431065  549077 main.go:141] libmachine: (ha-106302-m03)       <source network='mk-ha-106302'/>
	I1205 19:21:14.431075  549077 main.go:141] libmachine: (ha-106302-m03)       <model type='virtio'/>
	I1205 19:21:14.431084  549077 main.go:141] libmachine: (ha-106302-m03)     </interface>
	I1205 19:21:14.431096  549077 main.go:141] libmachine: (ha-106302-m03)     <interface type='network'>
	I1205 19:21:14.431107  549077 main.go:141] libmachine: (ha-106302-m03)       <source network='default'/>
	I1205 19:21:14.431122  549077 main.go:141] libmachine: (ha-106302-m03)       <model type='virtio'/>
	I1205 19:21:14.431134  549077 main.go:141] libmachine: (ha-106302-m03)     </interface>
	I1205 19:21:14.431143  549077 main.go:141] libmachine: (ha-106302-m03)     <serial type='pty'>
	I1205 19:21:14.431151  549077 main.go:141] libmachine: (ha-106302-m03)       <target port='0'/>
	I1205 19:21:14.431161  549077 main.go:141] libmachine: (ha-106302-m03)     </serial>
	I1205 19:21:14.431168  549077 main.go:141] libmachine: (ha-106302-m03)     <console type='pty'>
	I1205 19:21:14.431178  549077 main.go:141] libmachine: (ha-106302-m03)       <target type='serial' port='0'/>
	I1205 19:21:14.431186  549077 main.go:141] libmachine: (ha-106302-m03)     </console>
	I1205 19:21:14.431201  549077 main.go:141] libmachine: (ha-106302-m03)     <rng model='virtio'>
	I1205 19:21:14.431213  549077 main.go:141] libmachine: (ha-106302-m03)       <backend model='random'>/dev/random</backend>
	I1205 19:21:14.431223  549077 main.go:141] libmachine: (ha-106302-m03)     </rng>
	I1205 19:21:14.431230  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.431248  549077 main.go:141] libmachine: (ha-106302-m03)     
	I1205 19:21:14.431260  549077 main.go:141] libmachine: (ha-106302-m03)   </devices>
	I1205 19:21:14.431266  549077 main.go:141] libmachine: (ha-106302-m03) </domain>
	I1205 19:21:14.431276  549077 main.go:141] libmachine: (ha-106302-m03) 
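
	Note: the XML printed above is what the kvm2 driver hands to libvirt to define the new domain before booting it. A minimal sketch of the define-and-start step with the libvirt Go bindings (libvirt.org/go/libvirt); the XML file path is illustrative and this is not the driver's actual code:

	    package main

	    import (
	        "fmt"
	        "os"

	        libvirt "libvirt.org/go/libvirt"
	    )

	    func main() {
	        // domain XML of the shape logged above, read from a file (path is illustrative)
	        xml, err := os.ReadFile("ha-106302-m03.xml")
	        if err != nil {
	            panic(err)
	        }

	        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()

	        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	        if err != nil {
	            panic(err)
	        }
	        defer dom.Free()

	        if err := dom.Create(); err != nil { // boot the defined domain
	            panic(err)
	        }
	        fmt.Println("domain defined and started")
	    }
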
	I1205 19:21:14.438494  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:19:ce:fd in network default
	I1205 19:21:14.439230  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:14.439249  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring networks are active...
	I1205 19:21:14.440093  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring network default is active
	I1205 19:21:14.440381  549077 main.go:141] libmachine: (ha-106302-m03) Ensuring network mk-ha-106302 is active
	I1205 19:21:14.440705  549077 main.go:141] libmachine: (ha-106302-m03) Getting domain xml...
	I1205 19:21:14.441404  549077 main.go:141] libmachine: (ha-106302-m03) Creating domain...
	I1205 19:21:15.693271  549077 main.go:141] libmachine: (ha-106302-m03) Waiting to get IP...
	I1205 19:21:15.694143  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:15.694577  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:15.694598  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:15.694548  549869 retry.go:31] will retry after 242.776885ms: waiting for machine to come up
	I1205 19:21:15.939062  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:15.939524  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:15.939551  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:15.939479  549869 retry.go:31] will retry after 378.968491ms: waiting for machine to come up
	I1205 19:21:16.320454  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:16.320979  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:16.321027  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:16.320939  549869 retry.go:31] will retry after 344.418245ms: waiting for machine to come up
	I1205 19:21:16.667478  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:16.667854  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:16.667886  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:16.667793  549869 retry.go:31] will retry after 423.913988ms: waiting for machine to come up
	I1205 19:21:17.093467  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:17.093883  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:17.093914  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:17.093826  549869 retry.go:31] will retry after 515.714654ms: waiting for machine to come up
	I1205 19:21:17.611140  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:17.611460  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:17.611485  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:17.611417  549869 retry.go:31] will retry after 696.033751ms: waiting for machine to come up
	I1205 19:21:18.308904  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:18.309411  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:18.309441  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:18.309369  549869 retry.go:31] will retry after 785.032938ms: waiting for machine to come up
	I1205 19:21:19.095780  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:19.096341  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:19.096368  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:19.096298  549869 retry.go:31] will retry after 896.435978ms: waiting for machine to come up
	I1205 19:21:19.994107  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:19.994555  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:19.994578  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:19.994515  549869 retry.go:31] will retry after 1.855664433s: waiting for machine to come up
	I1205 19:21:21.852199  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:21.852746  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:21.852782  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:21.852681  549869 retry.go:31] will retry after 1.846119751s: waiting for machine to come up
	I1205 19:21:23.701581  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:23.702157  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:23.702188  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:23.702108  549869 retry.go:31] will retry after 2.613135019s: waiting for machine to come up
	I1205 19:21:26.317749  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:26.318296  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:26.318317  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:26.318258  549869 retry.go:31] will retry after 3.299144229s: waiting for machine to come up
	I1205 19:21:29.618947  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:29.619445  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:29.619480  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:29.619393  549869 retry.go:31] will retry after 3.447245355s: waiting for machine to come up
	I1205 19:21:33.071166  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:33.071564  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find current IP address of domain ha-106302-m03 in network mk-ha-106302
	I1205 19:21:33.071595  549077 main.go:141] libmachine: (ha-106302-m03) DBG | I1205 19:21:33.071509  549869 retry.go:31] will retry after 3.459206484s: waiting for machine to come up
	I1205 19:21:36.533492  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.533999  549077 main.go:141] libmachine: (ha-106302-m03) Found IP for machine: 192.168.39.151
	I1205 19:21:36.534029  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has current primary IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.534063  549077 main.go:141] libmachine: (ha-106302-m03) Reserving static IP address...
	I1205 19:21:36.534590  549077 main.go:141] libmachine: (ha-106302-m03) DBG | unable to find host DHCP lease matching {name: "ha-106302-m03", mac: "52:54:00:e6:65:e2", ip: "192.168.39.151"} in network mk-ha-106302
	I1205 19:21:36.616736  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Getting to WaitForSSH function...
	I1205 19:21:36.616827  549077 main.go:141] libmachine: (ha-106302-m03) Reserved static IP address: 192.168.39.151
	I1205 19:21:36.616852  549077 main.go:141] libmachine: (ha-106302-m03) Waiting for SSH to be available...
	I1205 19:21:36.619362  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.620041  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.620071  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.620207  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using SSH client type: external
	I1205 19:21:36.620243  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa (-rw-------)
	I1205 19:21:36.620289  549077 main.go:141] libmachine: (ha-106302-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:21:36.620307  549077 main.go:141] libmachine: (ha-106302-m03) DBG | About to run SSH command:
	I1205 19:21:36.620323  549077 main.go:141] libmachine: (ha-106302-m03) DBG | exit 0
	I1205 19:21:36.748331  549077 main.go:141] libmachine: (ha-106302-m03) DBG | SSH cmd err, output: <nil>: 
	I1205 19:21:36.748638  549077 main.go:141] libmachine: (ha-106302-m03) KVM machine creation complete!
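
	Note: both the wait-for-IP and wait-for-SSH phases above are retry loops with a growing, jittered delay ("will retry after ..."). A generic sketch of that pattern, assuming nothing about minikube's own retry helper; the probe function is a placeholder for the DHCP-lease or SSH check:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // retryWithBackoff re-runs probe with a growing, jittered delay until it
	    // succeeds or the total budget is spent. Illustrative only.
	    func retryWithBackoff(probe func() error, initial, max, total time.Duration) error {
	        delay := initial
	        start := time.Now()
	        for {
	            err := probe()
	            if err == nil {
	                return nil
	            }
	            if time.Since(start) > total {
	                return fmt.Errorf("timed out waiting: last error: %w", err)
	            }
	            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
	            time.Sleep(delay + jitter)
	            if delay < max {
	                delay *= 2
	            }
	        }
	    }

	    func main() {
	        attempts := 0
	        err := retryWithBackoff(func() error {
	            attempts++
	            if attempts < 4 {
	                return errors.New("machine has no IP yet") // stand-in for the lease lookup
	            }
	            return nil
	        }, 250*time.Millisecond, 4*time.Second, 30*time.Second)
	        fmt.Println(attempts, err)
	    }
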
	I1205 19:21:36.748951  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:36.749696  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:36.749899  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:36.750158  549077 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:21:36.750177  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetState
	I1205 19:21:36.751459  549077 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:21:36.751496  549077 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:21:36.751505  549077 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:21:36.751516  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.753721  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.754147  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.754180  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.754321  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.754488  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.754635  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.754782  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.754931  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.755238  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.755253  549077 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:21:36.859924  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:21:36.859961  549077 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:21:36.859974  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.864316  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.864691  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.864716  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.864886  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.865081  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.865227  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.865363  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.865505  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.865742  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.865757  549077 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:21:36.969493  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 19:21:36.969588  549077 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:21:36.969602  549077 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:21:36.969613  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:36.969955  549077 buildroot.go:166] provisioning hostname "ha-106302-m03"
	I1205 19:21:36.969984  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:36.970178  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:36.972856  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.973248  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:36.973275  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:36.973447  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:36.973641  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.973807  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:36.973971  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:36.974182  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:36.974409  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:36.974424  549077 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302-m03 && echo "ha-106302-m03" | sudo tee /etc/hostname
	I1205 19:21:37.091631  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302-m03
	
	I1205 19:21:37.091670  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.095049  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.095508  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.095538  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.095711  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.095892  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.096106  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.096340  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.096575  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.096743  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.096759  549077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:21:37.210648  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
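
	Note: every provisioning step here is a shell command run over SSH as the docker user with the generated machine key (the hostname and /etc/hosts edits above, the cert copies below). A compact sketch of that transport with golang.org/x/crypto/ssh; the host, key path and command are copied from the log purely as placeholders:

	    package main

	    import (
	        "fmt"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa")
	        if err != nil {
	            panic(err)
	        }
	        signer, err := ssh.ParsePrivateKey(keyPEM)
	        if err != nil {
	            panic(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	        }
	        client, err := ssh.Dial("tcp", "192.168.39.151:22", cfg)
	        if err != nil {
	            panic(err)
	        }
	        defer client.Close()

	        sess, err := client.NewSession()
	        if err != nil {
	            panic(err)
	        }
	        defer sess.Close()

	        out, err := sess.CombinedOutput(`sudo hostname ha-106302-m03 && echo "ha-106302-m03" | sudo tee /etc/hostname`)
	        fmt.Printf("err=%v output=%s\n", err, out)
	    }
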
	I1205 19:21:37.210686  549077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:21:37.210703  549077 buildroot.go:174] setting up certificates
	I1205 19:21:37.210719  549077 provision.go:84] configureAuth start
	I1205 19:21:37.210728  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetMachineName
	I1205 19:21:37.211084  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:37.214307  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.214777  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.214811  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.214993  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.217609  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.218026  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.218059  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.218357  549077 provision.go:143] copyHostCerts
	I1205 19:21:37.218397  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:21:37.218443  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:21:37.218457  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:21:37.218538  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:21:37.218640  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:21:37.218667  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:21:37.218672  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:21:37.218707  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:21:37.218773  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:21:37.218800  549077 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:21:37.218810  549077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:21:37.218844  549077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:21:37.218931  549077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302-m03 san=[127.0.0.1 192.168.39.151 ha-106302-m03 localhost minikube]
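
	Note: the server certificate above is generated with the listed SANs (127.0.0.1, the machine IP 192.168.39.151, the hostname, localhost, minikube) and signed by the profile's CA. A hedged crypto/x509 sketch showing the SAN and expiry handling; it self-signs for brevity instead of signing with ca-key.pem:

	    package main

	    import (
	        "crypto/ecdsa"
	        "crypto/elliptic"
	        "crypto/rand"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "fmt"
	        "math/big"
	        "net"
	        "time"
	    )

	    func main() {
	        // SANs as logged: [127.0.0.1 192.168.39.151 ha-106302-m03 localhost minikube]
	        ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.151")}
	        dns := []string{"ha-106302-m03", "localhost", "minikube"}

	        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-106302-m03"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            IPAddresses:  ips,
	            DNSNames:     dns,
	        }
	        // self-signed here for brevity; the real flow signs with the CA key pair
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
	    }
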
	I1205 19:21:37.343754  549077 provision.go:177] copyRemoteCerts
	I1205 19:21:37.343819  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:21:37.343847  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.346846  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.347219  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.347248  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.347438  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.347639  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.347948  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.348134  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:37.432798  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:21:37.432880  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:21:37.459881  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:21:37.459950  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 19:21:37.486599  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:21:37.486685  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:21:37.511864  549077 provision.go:87] duration metric: took 301.129005ms to configureAuth
	I1205 19:21:37.511899  549077 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:21:37.512151  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:37.512247  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.515413  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.515827  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.515873  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.516082  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.516362  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.516553  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.516696  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.516848  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.517021  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.517041  549077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:21:37.766182  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:21:37.766214  549077 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:21:37.766223  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetURL
	I1205 19:21:37.767491  549077 main.go:141] libmachine: (ha-106302-m03) DBG | Using libvirt version 6000000
	I1205 19:21:37.770234  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.770645  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.770683  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.770820  549077 main.go:141] libmachine: Docker is up and running!
	I1205 19:21:37.770836  549077 main.go:141] libmachine: Reticulating splines...
	I1205 19:21:37.770844  549077 client.go:171] duration metric: took 23.716534789s to LocalClient.Create
	I1205 19:21:37.770869  549077 start.go:167] duration metric: took 23.716613038s to libmachine.API.Create "ha-106302"
	I1205 19:21:37.770879  549077 start.go:293] postStartSetup for "ha-106302-m03" (driver="kvm2")
	I1205 19:21:37.770890  549077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:21:37.770909  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:37.771260  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:21:37.771293  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.773751  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.774322  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.774351  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.774623  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.774898  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.775132  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.775318  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:37.864963  549077 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:21:37.869224  549077 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:21:37.869250  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:21:37.869346  549077 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:21:37.869450  549077 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:21:37.869464  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:21:37.869572  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:21:37.878920  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:21:37.904695  549077 start.go:296] duration metric: took 133.797994ms for postStartSetup
	I1205 19:21:37.904759  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetConfigRaw
	I1205 19:21:37.905447  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:37.908301  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.908672  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.908702  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.908956  549077 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:21:37.909156  549077 start.go:128] duration metric: took 23.874183503s to createHost
	I1205 19:21:37.909187  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:37.911450  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.911786  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:37.911820  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:37.911891  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:37.912073  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.912217  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:37.912383  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:37.912551  549077 main.go:141] libmachine: Using SSH client type: native
	I1205 19:21:37.912721  549077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.151 22 <nil> <nil>}
	I1205 19:21:37.912731  549077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:21:38.013720  549077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733426497.965708253
	
	I1205 19:21:38.013754  549077 fix.go:216] guest clock: 1733426497.965708253
	I1205 19:21:38.013766  549077 fix.go:229] Guest: 2024-12-05 19:21:37.965708253 +0000 UTC Remote: 2024-12-05 19:21:37.909171964 +0000 UTC m=+152.282908362 (delta=56.536289ms)
	I1205 19:21:38.013790  549077 fix.go:200] guest clock delta is within tolerance: 56.536289ms
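The two fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the 56ms drift because it is under the tolerance. A sketch of that comparison; the tolerance value here is an assumption, the real threshold lives in minikube's fix.go:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1733426497.965708253" // raw output of `date +%s.%N` on the VM, from the log
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)

	const tolerance = 2 * time.Second // assumed value for this sketch
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; the clock would be resynced\n", delta)
	}
}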
	I1205 19:21:38.013799  549077 start.go:83] releasing machines lock for "ha-106302-m03", held for 23.978946471s
	I1205 19:21:38.013827  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.014134  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:38.016789  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.017218  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.017243  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.019529  549077 out.go:177] * Found network options:
	I1205 19:21:38.020846  549077 out.go:177]   - NO_PROXY=192.168.39.185,192.168.39.22
	W1205 19:21:38.022010  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:21:38.022031  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:21:38.022044  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022565  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022780  549077 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:21:38.022889  549077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:21:38.022930  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	W1205 19:21:38.022997  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 19:21:38.023035  549077 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 19:21:38.023141  549077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:21:38.023159  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:21:38.025672  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.025960  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026079  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.026109  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026225  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:38.026344  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:38.026368  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:38.026432  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:38.026548  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:21:38.026555  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:38.026676  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:21:38.026727  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:38.026820  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:21:38.026963  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:21:38.262374  549077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:21:38.269119  549077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:21:38.269192  549077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:21:38.288736  549077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:21:38.288773  549077 start.go:495] detecting cgroup driver to use...
	I1205 19:21:38.288918  549077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:21:38.308145  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:21:38.324419  549077 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:21:38.324486  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:21:38.340495  549077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:21:38.356196  549077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:21:38.499051  549077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:21:38.664170  549077 docker.go:233] disabling docker service ...
	I1205 19:21:38.664261  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:21:38.679720  549077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:21:38.693887  549077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:21:38.835246  549077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:21:38.967777  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:21:38.984739  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:21:39.005139  549077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:21:39.005219  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.018668  549077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:21:39.018748  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.030582  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.042783  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.055956  549077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:21:39.068121  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.079421  549077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.099262  549077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:21:39.112188  549077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:21:39.123835  549077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:21:39.123897  549077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:21:39.142980  549077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:21:39.158784  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:21:39.282396  549077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:21:39.381886  549077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:21:39.381979  549077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:21:39.387103  549077 start.go:563] Will wait 60s for crictl version
	I1205 19:21:39.387165  549077 ssh_runner.go:195] Run: which crictl
	I1205 19:21:39.391338  549077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:21:39.433516  549077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
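Both 60s waits above (for /var/run/crio/crio.sock and for a working crictl) boil down to polling until a check succeeds or the deadline passes. A minimal polling helper in that spirit; the 500ms interval is an assumption of this sketch:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path (e.g. /var/run/crio/crio.sock)
// until it appears or the timeout expires.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio.sock is present")
}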
	I1205 19:21:39.433618  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:21:39.463442  549077 ssh_runner.go:195] Run: crio --version
	I1205 19:21:39.493740  549077 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:21:39.495019  549077 out.go:177]   - env NO_PROXY=192.168.39.185
	I1205 19:21:39.496240  549077 out.go:177]   - env NO_PROXY=192.168.39.185,192.168.39.22
	I1205 19:21:39.497508  549077 main.go:141] libmachine: (ha-106302-m03) Calling .GetIP
	I1205 19:21:39.500359  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:39.500726  549077 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:21:39.500755  549077 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:21:39.500911  549077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:21:39.505557  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:21:39.519317  549077 mustload.go:65] Loading cluster: ha-106302
	I1205 19:21:39.519614  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:21:39.519880  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:39.519923  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:39.535653  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I1205 19:21:39.536186  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:39.536801  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:39.536826  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:39.537227  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:39.537444  549077 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:21:39.538986  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:21:39.539332  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:39.539371  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:39.555429  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I1205 19:21:39.555999  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:39.556560  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:39.556589  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:39.556932  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:39.557156  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:21:39.557335  549077 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.151
	I1205 19:21:39.557356  549077 certs.go:194] generating shared ca certs ...
	I1205 19:21:39.557390  549077 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.557557  549077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:21:39.557617  549077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:21:39.557630  549077 certs.go:256] generating profile certs ...
	I1205 19:21:39.557734  549077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:21:39.557771  549077 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85
	I1205 19:21:39.557795  549077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.151 192.168.39.254]
	I1205 19:21:39.646088  549077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 ...
	I1205 19:21:39.646122  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85: {Name:mkca6986931a87aa8d4bcffb8b1ac6412a83db65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.646289  549077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85 ...
	I1205 19:21:39.646301  549077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85: {Name:mke7f657c575646b15413aa5e5525c127a73d588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:21:39.646374  549077 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.2331ea85 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:21:39.646516  549077 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.2331ea85 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:21:39.646682  549077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:21:39.646703  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:21:39.646737  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:21:39.646758  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:21:39.646775  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:21:39.646792  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:21:39.646808  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:21:39.646827  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:21:39.660323  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:21:39.660454  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:21:39.660507  549077 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:21:39.660523  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:21:39.660561  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:21:39.660595  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:21:39.660628  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:21:39.660684  549077 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:21:39.660725  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:21:39.660748  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:21:39.660768  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:39.660816  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:21:39.664340  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:39.664849  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:21:39.664879  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:39.665165  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:21:39.665411  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:21:39.665607  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:21:39.665765  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:21:39.748651  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 19:21:39.754014  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 19:21:39.766062  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 19:21:39.771674  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 19:21:39.784618  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 19:21:39.789041  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 19:21:39.802785  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 19:21:39.808595  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1205 19:21:39.822597  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 19:21:39.827169  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 19:21:39.839924  549077 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 19:21:39.844630  549077 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1205 19:21:39.865166  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:21:39.890669  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:21:39.914805  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:21:39.938866  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:21:39.964041  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1205 19:21:39.989973  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:21:40.017414  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:21:40.042496  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:21:40.067448  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:21:40.092444  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:21:40.118324  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:21:40.144679  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 19:21:40.162124  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 19:21:40.178895  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 19:21:40.196614  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1205 19:21:40.216743  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 19:21:40.236796  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1205 19:21:40.255368  549077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 19:21:40.272767  549077 ssh_runner.go:195] Run: openssl version
	I1205 19:21:40.279013  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:21:40.291865  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.297901  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.297969  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:21:40.305022  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:21:40.317671  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:21:40.330059  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.335215  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.335291  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:21:40.341648  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:21:40.353809  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:21:40.366241  549077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.371103  549077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.371178  549077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:21:40.377410  549077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:21:40.389484  549077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:21:40.394089  549077 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 19:21:40.394159  549077 kubeadm.go:934] updating node {m03 192.168.39.151 8443 v1.31.2 crio true true} ...
	I1205 19:21:40.394281  549077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
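The kubelet systemd drop-in shown above is rendered from a template with the Kubernetes version, node name and node IP plugged in. A sketch of that rendering with text/template; the template text here is reduced to the fields visible in the log and is not minikube's full template:

package main

import (
	"os"
	"text/template"
)

const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the log lines above.
	err := tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.2",
		"NodeName":          "ha-106302-m03",
		"NodeIP":            "192.168.39.151",
	})
	if err != nil {
		panic(err)
	}
}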
	I1205 19:21:40.394312  549077 kube-vip.go:115] generating kube-vip config ...
	I1205 19:21:40.394383  549077 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:21:40.412017  549077 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:21:40.412099  549077 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
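The generated kube-vip static-pod manifest ends here; it pins the control-plane VIP 192.168.39.254 on eth0 and enables load-balancing on port 8443. One quick way to sanity-check such a generated manifest before it lands in /etc/kubernetes/manifests, shown as a sketch: the local file name and the use of sigs.k8s.io/yaml are assumptions, not part of minikube:

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("kube-vip.yaml") // assumed local copy of the manifest above
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	c := pod.Spec.Containers[0]
	fmt.Println("image:", c.Image) // expect ghcr.io/kube-vip/kube-vip:v0.8.6
	for _, e := range c.Env {
		switch e.Name {
		case "address", "lb_enable", "lb_port":
			fmt.Printf("%s=%s\n", e.Name, e.Value) // VIP and load-balancer settings from the log
		}
	}
}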
	I1205 19:21:40.412152  549077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:21:40.422903  549077 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 19:21:40.422982  549077 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 19:21:40.433537  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1205 19:21:40.433551  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1205 19:21:40.433572  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:21:40.433606  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:21:40.433603  549077 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 19:21:40.433634  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 19:21:40.433638  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:21:40.433701  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 19:21:40.452070  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 19:21:40.452102  549077 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:21:40.452118  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 19:21:40.452167  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 19:21:40.452196  549077 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 19:21:40.452198  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 19:21:40.481457  549077 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 19:21:40.481500  549077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
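Each Kubernetes binary above is fetched from dl.k8s.io, checked against the matching .sha256 (the "checksum=file:" pattern in the log), and then pushed into /var/lib/minikube/binaries on the node. A hedged sketch of that download-and-verify step; the URLs follow the pattern in the log, but the helper itself is not minikube's code:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the SHA-256 of what was written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/"
	for _, bin := range []string{"kubeadm", "kubelet", "kubectl"} {
		got, err := fetch(base+bin, bin)
		if err != nil {
			panic(err)
		}
		resp, err := http.Get(base + bin + ".sha256")
		if err != nil {
			panic(err)
		}
		want, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// The .sha256 file is assumed to start with the hex digest.
		fields := strings.Fields(string(want))
		if len(fields) == 0 || got != fields[0] {
			panic(fmt.Sprintf("%s: checksum mismatch", bin))
		}
		fmt.Println(bin, "verified:", got)
	}
}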
	I1205 19:21:41.411979  549077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 19:21:41.422976  549077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 19:21:41.442199  549077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:21:41.460832  549077 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:21:41.479070  549077 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:21:41.483375  549077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:21:41.497066  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:21:41.622952  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:21:41.643215  549077 host.go:66] Checking if "ha-106302" exists ...
	I1205 19:21:41.643585  549077 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:21:41.643643  549077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:21:41.660142  549077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39403
	I1205 19:21:41.660811  549077 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:21:41.661472  549077 main.go:141] libmachine: Using API Version  1
	I1205 19:21:41.661507  549077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:21:41.661908  549077 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:21:41.662156  549077 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:21:41.663022  549077 start.go:317] joinCluster: &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:21:41.663207  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 19:21:41.663239  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:21:41.666973  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:41.667413  549077 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:21:41.667445  549077 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:21:41.667629  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:21:41.667805  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:21:41.667958  549077 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:21:41.668092  549077 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:21:41.845827  549077 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:21:41.845894  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bitrl5.l9o7pcy69k2x0m8f --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m03 --control-plane --apiserver-advertise-address=192.168.39.151 --apiserver-bind-port=8443"
	I1205 19:22:05.091694  549077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bitrl5.l9o7pcy69k2x0m8f --discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-106302-m03 --control-plane --apiserver-advertise-address=192.168.39.151 --apiserver-bind-port=8443": (23.245742289s)
	I1205 19:22:05.091745  549077 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 19:22:05.651069  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-106302-m03 minikube.k8s.io/updated_at=2024_12_05T19_22_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=ha-106302 minikube.k8s.io/primary=false
	I1205 19:22:05.805746  549077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-106302-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 19:22:05.942387  549077 start.go:319] duration metric: took 24.279360239s to joinCluster
	I1205 19:22:05.942527  549077 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:22:05.942909  549077 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:22:05.943936  549077 out.go:177] * Verifying Kubernetes components...
	I1205 19:22:05.945223  549077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:22:06.284991  549077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:22:06.343812  549077 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:22:06.344263  549077 kapi.go:59] client config for ha-106302: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 19:22:06.344398  549077 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.185:8443
	I1205 19:22:06.344797  549077 node_ready.go:35] waiting up to 6m0s for node "ha-106302-m03" to be "Ready" ...
	I1205 19:22:06.344937  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:06.344951  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:06.344962  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:06.344969  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:06.358416  549077 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1205 19:22:06.845609  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:06.845637  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:06.845650  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:06.845657  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:06.850140  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:07.345201  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:07.345229  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:07.345238  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:07.345242  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:07.349137  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:07.845591  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:07.845615  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:07.845624  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:07.845628  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:07.849417  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:08.345109  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:08.345139  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:08.345151  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:08.345155  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:08.349617  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:08.350266  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
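node_ready.go polls GET /api/v1/nodes/ha-106302-m03 roughly every half-second until the node reports Ready or the 6m0s budget is spent, which is what the round_trippers lines above and below record. The same wait expressed with client-go instead of raw round-trippers, as a sketch rather than the code that produced these lines:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20052-530897/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms for up to 6 minutes, matching the wait logged above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-106302-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-106302-m03 is Ready")
}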
	I1205 19:22:08.845598  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:08.845626  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:08.845638  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:08.845643  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:08.849144  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:09.345621  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:09.345646  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:09.345656  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:09.345660  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:09.349983  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:09.845757  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:09.845782  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:09.845790  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:09.845794  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:09.849681  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:10.345604  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:10.345635  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:10.345648  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:10.345654  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:10.349727  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:10.350478  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:10.845342  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:10.845367  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:10.845376  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:10.845381  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:10.848990  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:11.346073  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:11.346097  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:11.346105  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:11.346109  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:11.350613  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:11.845378  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:11.845411  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:11.845426  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:11.845434  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:11.849253  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:12.345303  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:12.345337  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:12.345349  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:12.345358  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:12.352355  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:12.353182  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:12.845552  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:12.845581  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:12.845591  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:12.845595  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:12.849732  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:13.345587  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:13.345613  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:13.345623  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:13.345629  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:13.349259  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:13.845165  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:13.845197  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:13.845209  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:13.845214  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:13.849815  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:14.345423  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:14.345458  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:14.345471  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:14.345480  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:14.353042  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:22:14.353960  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:14.845215  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:14.845239  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:14.845248  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:14.845252  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:14.848681  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:15.345651  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:15.345681  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:15.345699  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:15.345706  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:15.349604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:15.845599  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:15.845627  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:15.845637  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:15.845641  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:15.849736  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:16.345974  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:16.346003  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:16.346012  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:16.346017  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:16.350399  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:16.845026  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:16.845057  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:16.845067  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:16.845071  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:16.848713  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:16.849459  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:17.345612  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:17.345660  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:17.345688  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:17.345700  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:17.349461  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:17.845355  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:17.845379  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:17.845388  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:17.845392  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:17.851232  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:22:18.346074  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:18.346098  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:18.346107  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:18.346112  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:18.350327  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:18.845241  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:18.845266  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:18.845273  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:18.845277  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:18.848579  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:18.849652  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:19.345480  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:19.345506  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:19.345515  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:19.345519  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:19.349757  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:19.845572  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:19.845597  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:19.845606  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:19.845621  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:19.849116  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:20.345089  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:20.345113  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:20.345121  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:20.345126  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:20.348890  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:20.846039  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:20.846062  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:20.846070  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:20.846075  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:20.850247  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:20.850972  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:21.345329  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:21.345370  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:21.345381  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:21.345387  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:21.349225  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:21.845571  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:21.845604  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:21.845616  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:21.845622  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:21.849183  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:22.345428  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:22.345453  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:22.345461  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:22.345466  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:22.349371  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:22.845510  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:22.845534  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:22.845543  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:22.845549  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:22.849220  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:23.345442  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:23.345470  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:23.345479  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:23.345484  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:23.349347  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:23.350300  549077 node_ready.go:53] node "ha-106302-m03" has status "Ready":"False"
	I1205 19:22:23.845549  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:23.845574  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:23.845582  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:23.845587  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:23.849893  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:24.345261  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:24.345292  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:24.345302  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:24.345306  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:24.349136  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:24.845545  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:24.845574  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:24.845583  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:24.845586  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:24.849619  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:25.345655  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.345687  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.345745  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.345781  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.349427  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.350218  549077 node_ready.go:49] node "ha-106302-m03" has status "Ready":"True"
	I1205 19:22:25.350237  549077 node_ready.go:38] duration metric: took 19.005417749s for node "ha-106302-m03" to be "Ready" ...
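The loop above is a plain poll: GET the node roughly every 500ms until its NodeReady condition reports "True" (about 19s here for ha-106302-m03). A minimal client-go sketch of that kind of wait follows; the helper name, interval and kubeconfig path are assumptions, not minikube's node_ready implementation.

    // Minimal sketch, assuming a placeholder kubeconfig path: poll a node
    // until its NodeReady condition is True or the timeout expires.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // retry on transient errors
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitNodeReady(context.Background(), cs, "ha-106302-m03", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node Ready")
    }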
	I1205 19:22:25.350247  549077 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:22:25.350324  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:25.350335  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.350342  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.350347  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.358969  549077 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 19:22:25.365676  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.365768  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-45m77
	I1205 19:22:25.365777  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.365785  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.365790  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.369626  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.370252  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.370268  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.370276  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.370280  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.373604  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.374401  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.374417  549077 pod_ready.go:82] duration metric: took 8.712508ms for pod "coredns-7c65d6cfc9-45m77" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.374426  549077 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.374491  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sjsv2
	I1205 19:22:25.374498  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.374505  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.374510  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.377314  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.378099  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.378115  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.378125  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.378130  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.380745  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.381330  549077 pod_ready.go:93] pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.381354  549077 pod_ready.go:82] duration metric: took 6.920357ms for pod "coredns-7c65d6cfc9-sjsv2" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.381366  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.381430  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302
	I1205 19:22:25.381437  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.381445  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.381452  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.384565  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.385119  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:25.385140  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.385150  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.385156  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.387832  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.388313  549077 pod_ready.go:93] pod "etcd-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.388334  549077 pod_ready.go:82] duration metric: took 6.95931ms for pod "etcd-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.388344  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.388405  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m02
	I1205 19:22:25.388413  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.388420  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.388426  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.390958  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.391627  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:25.391646  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.391657  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.391664  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.394336  549077 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 19:22:25.394843  549077 pod_ready.go:93] pod "etcd-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.394860  549077 pod_ready.go:82] duration metric: took 6.510348ms for pod "etcd-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.394870  549077 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.546322  549077 request.go:632] Waited for 151.362843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m03
	I1205 19:22:25.546441  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/etcd-ha-106302-m03
	I1205 19:22:25.546457  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.546468  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.546478  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.551505  549077 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 19:22:25.746379  549077 request.go:632] Waited for 194.045637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.746447  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:25.746452  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.746460  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.746465  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.749940  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:25.750364  549077 pod_ready.go:93] pod "etcd-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:25.750384  549077 pod_ready.go:82] duration metric: took 355.50711ms for pod "etcd-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
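The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter: with QPS and Burst left at 0 in the rest.Config dumped earlier, client-go falls back to its defaults (5 QPS, burst 10), so bursts of node/pod GETs are delayed and logged by request.go. The sketch below shows how those limits are raised on a client-go config; it is illustrative only, not something this test does.

    // Minimal sketch, assuming a placeholder kubeconfig path: raise the
    // client-side rate limit that produces the throttling log lines above.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        // QPS=0/Burst=0 means client-go's defaults (5 QPS, burst 10); requests
        // beyond that are delayed and logged by request.go.
        cfg.QPS = 50
        cfg.Burst = 100
        _ = kubernetes.NewForConfigOrDie(cfg)
    }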
	I1205 19:22:25.750410  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:25.945946  549077 request.go:632] Waited for 195.44547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:22:25.946012  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302
	I1205 19:22:25.946017  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:25.946026  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:25.946031  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:25.949896  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.146187  549077 request.go:632] Waited for 195.303913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:26.146261  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:26.146266  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.146281  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.146284  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.150155  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.150850  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.150872  549077 pod_ready.go:82] duration metric: took 400.452175ms for pod "kube-apiserver-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.150884  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.346018  549077 request.go:632] Waited for 195.032626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:22:26.346106  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m02
	I1205 19:22:26.346114  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.346126  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.346134  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.350215  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:26.546617  549077 request.go:632] Waited for 195.375501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:26.546704  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:26.546710  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.546718  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.546722  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.550695  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.551267  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.551288  549077 pod_ready.go:82] duration metric: took 400.395912ms for pod "kube-apiserver-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.551301  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.746009  549077 request.go:632] Waited for 194.599498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m03
	I1205 19:22:26.746081  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-106302-m03
	I1205 19:22:26.746088  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.746096  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.746102  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.750448  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:26.945801  549077 request.go:632] Waited for 194.318273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:26.945876  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:26.945882  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:26.945893  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:26.945901  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:26.949211  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:26.949781  549077 pod_ready.go:93] pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:26.949807  549077 pod_ready.go:82] duration metric: took 398.493465ms for pod "kube-apiserver-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:26.949821  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.145762  549077 request.go:632] Waited for 195.843082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:22:27.145841  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302
	I1205 19:22:27.145847  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.145856  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.145863  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.150825  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:27.346689  549077 request.go:632] Waited for 195.243035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:27.346772  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:27.346785  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.346804  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.346815  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.350485  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:27.351090  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:27.351111  549077 pod_ready.go:82] duration metric: took 401.282274ms for pod "kube-controller-manager-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.351122  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.546113  549077 request.go:632] Waited for 194.908111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:22:27.546216  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m02
	I1205 19:22:27.546228  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.546241  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.546255  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.550360  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:27.746526  549077 request.go:632] Waited for 195.360331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:27.746617  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:27.746626  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.746635  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.746640  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.753462  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:27.754708  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:27.754735  549077 pod_ready.go:82] duration metric: took 403.601936ms for pod "kube-controller-manager-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.754750  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:27.945674  549077 request.go:632] Waited for 190.826423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m03
	I1205 19:22:27.945746  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-106302-m03
	I1205 19:22:27.945752  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:27.945760  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:27.945764  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:27.949668  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.146444  549077 request.go:632] Waited for 195.387763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.146510  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.146515  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.146523  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.146535  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.150750  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.151357  549077 pod_ready.go:93] pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.151381  549077 pod_ready.go:82] duration metric: took 396.622007ms for pod "kube-controller-manager-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.151393  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.345948  549077 request.go:632] Waited for 194.471828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:22:28.346043  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n57lf
	I1205 19:22:28.346051  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.346059  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.346064  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.350114  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.546260  549077 request.go:632] Waited for 195.407825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:28.546369  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:28.546382  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.546394  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.546413  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.551000  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:28.551628  549077 pod_ready.go:93] pod "kube-proxy-n57lf" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.551654  549077 pod_ready.go:82] duration metric: took 400.254319ms for pod "kube-proxy-n57lf" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.551666  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pghdx" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.746587  549077 request.go:632] Waited for 194.82213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pghdx
	I1205 19:22:28.746705  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pghdx
	I1205 19:22:28.746718  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.746727  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.746737  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.750453  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.946581  549077 request.go:632] Waited for 195.373436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.946682  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:28.946693  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:28.946704  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:28.946714  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:28.949892  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:28.950341  549077 pod_ready.go:93] pod "kube-proxy-pghdx" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:28.950360  549077 pod_ready.go:82] duration metric: took 398.68655ms for pod "kube-proxy-pghdx" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:28.950370  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.145964  549077 request.go:632] Waited for 195.515335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:22:29.146035  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zw6nj
	I1205 19:22:29.146042  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.146052  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.146058  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.149161  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:29.346356  549077 request.go:632] Waited for 196.408917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.346467  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.346475  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.346505  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.346577  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.350334  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:29.351251  549077 pod_ready.go:93] pod "kube-proxy-zw6nj" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:29.351290  549077 pod_ready.go:82] duration metric: took 400.913186ms for pod "kube-proxy-zw6nj" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.351307  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.545602  549077 request.go:632] Waited for 194.210598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:22:29.545674  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302
	I1205 19:22:29.545682  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.545694  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.545705  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.549980  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:29.746034  549077 request.go:632] Waited for 195.473431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.746121  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302
	I1205 19:22:29.746128  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.746140  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.746148  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.750509  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:29.751460  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:29.751481  549077 pod_ready.go:82] duration metric: took 400.162109ms for pod "kube-scheduler-ha-106302" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.751493  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:29.946019  549077 request.go:632] Waited for 194.44438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:22:29.946119  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m02
	I1205 19:22:29.946131  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:29.946140  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:29.946148  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:29.949224  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.146466  549077 request.go:632] Waited for 196.38785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:30.146542  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m02
	I1205 19:22:30.146550  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.146562  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.146575  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.150163  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.150654  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:30.150677  549077 pod_ready.go:82] duration metric: took 399.174639ms for pod "kube-scheduler-ha-106302-m02" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.150688  549077 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.346682  549077 request.go:632] Waited for 195.915039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m03
	I1205 19:22:30.346759  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-106302-m03
	I1205 19:22:30.346764  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.346773  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.346788  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.350596  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.545763  549077 request.go:632] Waited for 194.297931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:30.545847  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes/ha-106302-m03
	I1205 19:22:30.545854  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.545865  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.545873  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.549623  549077 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 19:22:30.550473  549077 pod_ready.go:93] pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 19:22:30.550494  549077 pod_ready.go:82] duration metric: took 399.800176ms for pod "kube-scheduler-ha-106302-m03" in "kube-system" namespace to be "Ready" ...
	I1205 19:22:30.550505  549077 pod_ready.go:39] duration metric: took 5.200248716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
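Each of the pod waits above reduces to the same test: fetch the pod and require its PodReady condition to be "True". A minimal client-go sketch of that check over one of the listed labels follows; the selector and kubeconfig path are placeholders, not the test's own helpers.

    // Illustrative sketch: list kube-system pods for one of the labels above
    // and report whether each has the PodReady condition set to True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "component=etcd"}) // placeholder selector
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
        }
    }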
	I1205 19:22:30.550539  549077 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:22:30.550598  549077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:22:30.565872  549077 api_server.go:72] duration metric: took 24.623303746s to wait for apiserver process to appear ...
	I1205 19:22:30.565908  549077 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:22:30.565931  549077 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I1205 19:22:30.570332  549077 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I1205 19:22:30.570415  549077 round_trippers.go:463] GET https://192.168.39.185:8443/version
	I1205 19:22:30.570426  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.570440  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.570444  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.571545  549077 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 19:22:30.571615  549077 api_server.go:141] control plane version: v1.31.2
	I1205 19:22:30.571635  549077 api_server.go:131] duration metric: took 5.719204ms to wait for apiserver health ...
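The health probe above is a raw GET of /healthz (expecting a 200 with an "ok" body) followed by GET /version to read the control-plane version (v1.31.2 here). A minimal client-go sketch of both probes, assuming a placeholder kubeconfig path:

    // Minimal sketch: probe the apiserver's /healthz and /version endpoints
    // through client-go's discovery REST client.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // GET /healthz: a 200 with body "ok" means the apiserver is serving.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version: the control-plane version reported in the log above.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }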
	I1205 19:22:30.571664  549077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:22:30.746133  549077 request.go:632] Waited for 174.37713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:30.746217  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:30.746231  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.746244  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.746251  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.753131  549077 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 19:22:30.760159  549077 system_pods.go:59] 24 kube-system pods found
	I1205 19:22:30.760194  549077 system_pods.go:61] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:22:30.760202  549077 system_pods.go:61] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:22:30.760208  549077 system_pods.go:61] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:22:30.760214  549077 system_pods.go:61] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:22:30.760219  549077 system_pods.go:61] "etcd-ha-106302-m03" [08e9ef91-8e16-4ff1-a2df-8275e72a5697] Running
	I1205 19:22:30.760224  549077 system_pods.go:61] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:22:30.760228  549077 system_pods.go:61] "kindnet-wdsv9" [83d82f5d-42c3-47be-af20-41b82c16b114] Running
	I1205 19:22:30.760233  549077 system_pods.go:61] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:22:30.760238  549077 system_pods.go:61] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:22:30.760243  549077 system_pods.go:61] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:22:30.760249  549077 system_pods.go:61] "kube-apiserver-ha-106302-m03" [398242aa-f015-47ca-9132-23412c52878d] Running
	I1205 19:22:30.760254  549077 system_pods.go:61] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:22:30.760259  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:22:30.760288  549077 system_pods.go:61] "kube-controller-manager-ha-106302-m03" [8af17291-c1b7-417f-a2dd-5a00ca58b07e] Running
	I1205 19:22:30.760294  549077 system_pods.go:61] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:22:30.760300  549077 system_pods.go:61] "kube-proxy-pghdx" [915060a3-353c-4a2c-a9d6-494206776446] Running
	I1205 19:22:30.760306  549077 system_pods.go:61] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:22:30.760312  549077 system_pods.go:61] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:22:30.760321  549077 system_pods.go:61] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:22:30.760327  549077 system_pods.go:61] "kube-scheduler-ha-106302-m03" [1b601e0c-59c7-4248-b29c-44d19934f590] Running
	I1205 19:22:30.760333  549077 system_pods.go:61] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:22:30.760339  549077 system_pods.go:61] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:22:30.760347  549077 system_pods.go:61] "kube-vip-ha-106302-m03" [6e511769-148e-43eb-a4bb-6dd72dfcd11d] Running
	I1205 19:22:30.760352  549077 system_pods.go:61] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:22:30.760361  549077 system_pods.go:74] duration metric: took 188.685514ms to wait for pod list to return data ...
	I1205 19:22:30.760375  549077 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:22:30.946070  549077 request.go:632] Waited for 185.595824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:22:30.946137  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:22:30.946142  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:30.946151  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:30.946159  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:30.950732  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:30.950901  549077 default_sa.go:45] found service account: "default"
	I1205 19:22:30.950919  549077 default_sa.go:55] duration metric: took 190.53748ms for default service account to be created ...
	I1205 19:22:30.950929  549077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:22:31.146374  549077 request.go:632] Waited for 195.332956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:31.146437  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/namespaces/kube-system/pods
	I1205 19:22:31.146443  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:31.146451  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:31.146456  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:31.153763  549077 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 19:22:31.160825  549077 system_pods.go:86] 24 kube-system pods found
	I1205 19:22:31.160858  549077 system_pods.go:89] "coredns-7c65d6cfc9-45m77" [88196078-5292-43dc-84b2-dc53af435e5c] Running
	I1205 19:22:31.160865  549077 system_pods.go:89] "coredns-7c65d6cfc9-sjsv2" [b686cbc5-1b4f-44ea-89cb-70063b687718] Running
	I1205 19:22:31.160869  549077 system_pods.go:89] "etcd-ha-106302" [b0c81234-5186-4812-a1a2-4f035f9efabf] Running
	I1205 19:22:31.160874  549077 system_pods.go:89] "etcd-ha-106302-m02" [8c619411-697a-4eb0-8725-27811a17aba1] Running
	I1205 19:22:31.160878  549077 system_pods.go:89] "etcd-ha-106302-m03" [08e9ef91-8e16-4ff1-a2df-8275e72a5697] Running
	I1205 19:22:31.160882  549077 system_pods.go:89] "kindnet-thcsp" [e2eec41c-3ca9-42ff-801d-dfdf05f6eab2] Running
	I1205 19:22:31.160888  549077 system_pods.go:89] "kindnet-wdsv9" [83d82f5d-42c3-47be-af20-41b82c16b114] Running
	I1205 19:22:31.160893  549077 system_pods.go:89] "kindnet-xr9mh" [2044800c-f517-439e-810b-71a114cb044e] Running
	I1205 19:22:31.160900  549077 system_pods.go:89] "kube-apiserver-ha-106302" [688ddac9-2f42-4e6b-b9e8-a9c967a7180b] Running
	I1205 19:22:31.160908  549077 system_pods.go:89] "kube-apiserver-ha-106302-m02" [ad05d27e-72e0-443e-8ad3-2d464c116f27] Running
	I1205 19:22:31.160914  549077 system_pods.go:89] "kube-apiserver-ha-106302-m03" [398242aa-f015-47ca-9132-23412c52878d] Running
	I1205 19:22:31.160925  549077 system_pods.go:89] "kube-controller-manager-ha-106302" [e63c5a4d-c327-4040-b679-62b5b06abec9] Running
	I1205 19:22:31.160931  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m02" [fe707148-d0c6-4de3-841f-3a8143fa9217] Running
	I1205 19:22:31.160937  549077 system_pods.go:89] "kube-controller-manager-ha-106302-m03" [8af17291-c1b7-417f-a2dd-5a00ca58b07e] Running
	I1205 19:22:31.160946  549077 system_pods.go:89] "kube-proxy-n57lf" [94819792-89fc-4a70-a54f-02e594b657bf] Running
	I1205 19:22:31.160950  549077 system_pods.go:89] "kube-proxy-pghdx" [915060a3-353c-4a2c-a9d6-494206776446] Running
	I1205 19:22:31.160956  549077 system_pods.go:89] "kube-proxy-zw6nj" [d35e1426-9151-4eb3-95fd-c2b36c126b51] Running
	I1205 19:22:31.160960  549077 system_pods.go:89] "kube-scheduler-ha-106302" [6dd32258-0ba3-4f79-8d4b-165b918bbc36] Running
	I1205 19:22:31.160970  549077 system_pods.go:89] "kube-scheduler-ha-106302-m02" [b94b6bf9-4639-47d1-92be-0cbba44e65f3] Running
	I1205 19:22:31.160976  549077 system_pods.go:89] "kube-scheduler-ha-106302-m03" [1b601e0c-59c7-4248-b29c-44d19934f590] Running
	I1205 19:22:31.160979  549077 system_pods.go:89] "kube-vip-ha-106302" [03b99453-c78d-4aaf-93e8-7011ae363db4] Running
	I1205 19:22:31.160985  549077 system_pods.go:89] "kube-vip-ha-106302-m02" [2ec94818-bc15-4d60-95b4-e7f7235f0341] Running
	I1205 19:22:31.160989  549077 system_pods.go:89] "kube-vip-ha-106302-m03" [6e511769-148e-43eb-a4bb-6dd72dfcd11d] Running
	I1205 19:22:31.160992  549077 system_pods.go:89] "storage-provisioner" [88d6e224-b304-4f84-a162-9803400c9acf] Running
	I1205 19:22:31.161001  549077 system_pods.go:126] duration metric: took 210.065272ms to wait for k8s-apps to be running ...
	I1205 19:22:31.161014  549077 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:22:31.161075  549077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:22:31.179416  549077 system_svc.go:56] duration metric: took 18.393613ms WaitForService to wait for kubelet
	I1205 19:22:31.179447  549077 kubeadm.go:582] duration metric: took 25.236889217s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:22:31.179468  549077 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:22:31.345848  549077 request.go:632] Waited for 166.292279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.185:8443/api/v1/nodes
	I1205 19:22:31.345915  549077 round_trippers.go:463] GET https://192.168.39.185:8443/api/v1/nodes
	I1205 19:22:31.345920  549077 round_trippers.go:469] Request Headers:
	I1205 19:22:31.345937  549077 round_trippers.go:473]     Accept: application/json, */*
	I1205 19:22:31.345942  549077 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 19:22:31.350337  549077 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 19:22:31.351373  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351397  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351414  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351420  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351426  549077 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 19:22:31.351430  549077 node_conditions.go:123] node cpu capacity is 2
	I1205 19:22:31.351436  549077 node_conditions.go:105] duration metric: took 171.962205ms to run NodePressure ...
	I1205 19:22:31.351452  549077 start.go:241] waiting for startup goroutines ...
	I1205 19:22:31.351479  549077 start.go:255] writing updated cluster config ...
	I1205 19:22:31.351794  549077 ssh_runner.go:195] Run: rm -f paused
	I1205 19:22:31.407206  549077 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 19:22:31.410298  549077 out.go:177] * Done! kubectl is now configured to use "ha-106302" cluster and "default" namespace by default
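	(Editor's note, not part of the captured log: the trace above shows minikube polling the API server for kube-system pods, the default service account, kubelet status and node conditions before declaring the ha-106302 cluster ready; the "Waited for ... due to client-side throttling" lines are client-go's client-side rate limiter, not API-server priority-and-fairness. As a rough, hand-run equivalent of those checks, assuming the kubeconfig context "ha-106302" from this run is still available, one could run:

	    kubectl --context ha-106302 get pods -n kube-system
	    kubectl --context ha-106302 get serviceaccount default -n default
	    kubectl --context ha-106302 get nodes -o wide
	    minikube -p ha-106302 ssh "sudo systemctl is-active kubelet"

	These commands are illustrative only; they mirror the waits logged above rather than reproduce minikube's internal wait loop.)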
	
	
	==> CRI-O <==
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.762384101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426791762360176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10a45b10-4902-4788-b014-6256fb2ad036 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.762921590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60751307-88ce-4709-b217-bd71484e053b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.763006078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60751307-88ce-4709-b217-bd71484e053b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.763624762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60751307-88ce-4709-b217-bd71484e053b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.808051862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5322773e-68d1-450d-9cb0-a229ba4d8cd5 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.808148301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5322773e-68d1-450d-9cb0-a229ba4d8cd5 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.809369708Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bbc81f44-8af3-4773-89b7-75692af7c56a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.810025410Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426791810000898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbc81f44-8af3-4773-89b7-75692af7c56a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.810564144Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abd054d2-7096-47f4-92ff-b5f56aff5210 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.810634002Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abd054d2-7096-47f4-92ff-b5f56aff5210 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.810877308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abd054d2-7096-47f4-92ff-b5f56aff5210 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.867419673Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d769ae23-b917-40d6-be4f-4c96cb227c34 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.867556419Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d769ae23-b917-40d6-be4f-4c96cb227c34 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.868783360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a665acb5-3b7f-4736-b89c-b30daa3ec6ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.869255592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426791869227479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a665acb5-3b7f-4736-b89c-b30daa3ec6ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.869838976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c179ce5a-ef62-4110-a053-8e6f6246846a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.869915002Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c179ce5a-ef62-4110-a053-8e6f6246846a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.870877628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c179ce5a-ef62-4110-a053-8e6f6246846a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.923084404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a914ba4-7151-40c0-a468-f9d4113c4ccc name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.923158827Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a914ba4-7151-40c0-a468-f9d4113c4ccc name=/runtime.v1.RuntimeService/Version
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.926015519Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0466e0b1-36c0-43d0-b400-92ff9ad6c925 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.926791842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426791926754053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0466e0b1-36c0-43d0-b400-92ff9ad6c925 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.927440790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e48463c5-a7af-4dee-9fc5-1cb999966a40 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.927547866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e48463c5-a7af-4dee-9fc5-1cb999966a40 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:26:31 ha-106302 crio[666]: time="2024-12-05 19:26:31.927794286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8175779cb574608a2e0e051ddf4963e3b0f7f7b3a0bb6082137a16800a03a08e,PodSandboxId:619925cbc39c69135172b7e76775b358b55fa47d57b5dfe0f03a5194c0692777,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733426557247240128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-p8z47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16e14c1a-196d-42a8-b245-1a488cb9667f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b,PodSandboxId:95ad32628ed378cf8fe1c9cacc2bc59fc6969dc4a22ed2e11cbc6aa11f389771,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426409026160454,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sjsv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686cbc5-1b4f-44ea-89cb-70063b687718,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07,PodSandboxId:79783fce24db9824c8762aa0ebc246441d34d9d16f5b46829b9e44cac750e5b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733426408724382293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-45m77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
88196078-5292-43dc-84b2-dc53af435e5c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a647561fc8a8150a221a7d9831dde01fe407024d413eda1a607ac294e573764b,PodSandboxId:ba65941872158b7f807f5608fbad458facee98a81f1ec1014ac383579eda3127,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733426408698615726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d6e224-b304-4f84-a162-9803400c9acf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e,PodSandboxId:5f62be7378940215f775ba016eaaba9e085a5bde8d5f3bd2af7af71b2a161ba1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733426396906111541,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xr9mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2044800c-f517-439e-810b-71a114cb044e,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d,PodSandboxId:dc8d6361e49728eaa41e23a1d93aa34cfaa625af82fcfa2a884dd3b4f2b81c55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733426392
646389922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zw6nj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d35e1426-9151-4eb3-95fd-c2b36c126b51,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e,PodSandboxId:3cfec88984b8a0d72e94319ba62e7d4ab919d47ac556a084a2d6737ebd823e2e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173342638480
0708772,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94f9241c16c5e3fb852233a6fe3994b7,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a,PodSandboxId:594e9eb586b3236ea16c3700fc2cd0993924c9f7621e0cdde654b8062e9216ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733426381465280845,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44e395bdaa0336ddb64b019178e9d783,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44,PodSandboxId:c920b14cf50aa8ed9c35f9a67d873d3358f3e00a98649b822dcaf888ea4820e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733426381444138208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd6cd909fedaf70356c0cea88a63589f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8,PodSandboxId:411118291d3f33b6d7f7a80f545d0dfdb0f0d3142d4ff4deb2a42c08e68de419,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733426381437294125,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aeab01bb9a2149eedec308e9c9b613,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb,PodSandboxId:890699ae2c7d2cae9c6665fe590a645df186a046d832ec79a134309fabab3c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733426381376403502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-106302,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112c68d960b3bd38f8fac52ec570505b,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e48463c5-a7af-4dee-9fc5-1cb999966a40 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8175779cb5746       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   619925cbc39c6       busybox-7dff88458-p8z47
	d7af42dff52cf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   95ad32628ed37       coredns-7c65d6cfc9-sjsv2
	71878f2ac51ce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   79783fce24db9       coredns-7c65d6cfc9-45m77
	a647561fc8a81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ba65941872158       storage-provisioner
	8e0e4de270d59       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   5f62be7378940       kindnet-xr9mh
	013c8063671c4       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   dc8d6361e4972       kube-proxy-zw6nj
	a639bf005af20       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   3cfec88984b8a       kube-vip-ha-106302
	73802addf28ef       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   594e9eb586b32       etcd-ha-106302
	8d7fcd5f7d56d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   c920b14cf50aa       kube-apiserver-ha-106302
	dec1697264029       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   411118291d3f3       kube-scheduler-ha-106302
	c251344563e46       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   890699ae2c7d2       kube-controller-manager-ha-106302
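
The `RuntimeService/ListContainers` dump and the `container status` table above both come from the CRI-O socket on this node. As a rough illustration only (not part of the report), the Go sketch below queries the same CRI endpoint directly; the socket path matches the `kubeadm.alpha.kubernetes.io/cri-socket` annotation shown in the node descriptions further down, and the truncated-ID formatting is just an assumption to mimic the table layout.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket, as advertised in the node annotations above.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC that produced the ListContainers dump above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		id := c.Id
		if len(id) > 13 {
			id = id[:13] // shorten to the 13-char prefix used in the table
		}
		fmt.Printf("%-13s %-28s %s\n", id, c.Metadata.Name, c.State)
	}
}
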
	
	
	==> coredns [71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07] <==
	[INFO] 127.0.0.1:37176 - 32561 "HINFO IN 3495974066793148999.5277118907247610982. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022894865s
	[INFO] 10.244.1.2:51203 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.01735349s
	[INFO] 10.244.2.2:37733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000272502s
	[INFO] 10.244.2.2:53757 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001751263s
	[INFO] 10.244.2.2:54738 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000495007s
	[INFO] 10.244.0.4:45576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000412263s
	[INFO] 10.244.0.4:48159 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000083837s
	[INFO] 10.244.1.2:34578 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000302061s
	[INFO] 10.244.1.2:54721 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000235254s
	[INFO] 10.244.1.2:43877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000206178s
	[INFO] 10.244.1.2:35725 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012413s
	[INFO] 10.244.2.2:53111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00036507s
	[INFO] 10.244.2.2:60205 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00019223s
	[INFO] 10.244.2.2:49031 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000279282s
	[INFO] 10.244.1.2:48336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174589s
	[INFO] 10.244.1.2:47520 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164259s
	[INFO] 10.244.1.2:58000 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119136s
	[INFO] 10.244.1.2:52602 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196285s
	[INFO] 10.244.2.2:53065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143333s
	[INFO] 10.244.0.4:50807 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119749s
	[INFO] 10.244.0.4:60692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073699s
	[INFO] 10.244.1.2:46283 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281341s
	[INFO] 10.244.1.2:51750 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153725s
	[INFO] 10.244.2.2:33715 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141245s
	[INFO] 10.244.0.4:40497 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000233306s
	
	
	==> coredns [d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b] <==
	[INFO] 10.244.2.2:53827 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001485777s
	[INFO] 10.244.2.2:55594 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000308847s
	[INFO] 10.244.2.2:34459 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118477s
	[INFO] 10.244.2.2:39473 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062912s
	[INFO] 10.244.0.4:50797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084736s
	[INFO] 10.244.0.4:49715 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001903972s
	[INFO] 10.244.0.4:60150 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000344373s
	[INFO] 10.244.0.4:43238 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075717s
	[INFO] 10.244.0.4:55133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001508595s
	[INFO] 10.244.0.4:49161 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071435s
	[INFO] 10.244.0.4:34396 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048471s
	[INFO] 10.244.0.4:40602 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037032s
	[INFO] 10.244.2.2:46010 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013718s
	[INFO] 10.244.2.2:59322 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108224s
	[INFO] 10.244.2.2:38750 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154868s
	[INFO] 10.244.0.4:43291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123825s
	[INFO] 10.244.0.4:44515 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163484s
	[INFO] 10.244.1.2:60479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154514s
	[INFO] 10.244.1.2:42615 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000210654s
	[INFO] 10.244.2.2:57422 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132377s
	[INFO] 10.244.2.2:51037 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00039203s
	[INFO] 10.244.2.2:35850 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148988s
	[INFO] 10.244.0.4:37661 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206627s
	[INFO] 10.244.0.4:43810 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129193s
	[INFO] 10.244.0.4:47355 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145369s
	
	
	==> describe nodes <==
	Name:               ha-106302
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T19_19_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:19:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:19:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:22:51 +0000   Thu, 05 Dec 2024 19:20:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    ha-106302
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fbfe8f29ea445c2a705d4735bab42d9
	  System UUID:                9fbfe8f2-9ea4-45c2-a705-d4735bab42d9
	  Boot ID:                    fbdd1078-6187-4d3e-90aa-6ba60d4d7163
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p8z47              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 coredns-7c65d6cfc9-45m77             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m40s
	  kube-system                 coredns-7c65d6cfc9-sjsv2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m40s
	  kube-system                 etcd-ha-106302                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m45s
	  kube-system                 kindnet-xr9mh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m40s
	  kube-system                 kube-apiserver-ha-106302             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 kube-controller-manager-ha-106302    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 kube-proxy-zw6nj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-scheduler-ha-106302             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 kube-vip-ha-106302                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m39s  kube-proxy       
	  Normal  Starting                 6m45s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m45s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m45s  kubelet          Node ha-106302 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m45s  kubelet          Node ha-106302 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m45s  kubelet          Node ha-106302 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m41s  node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
	  Normal  NodeReady                6m24s  kubelet          Node ha-106302 status is now: NodeReady
	  Normal  RegisteredNode           5m36s  node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
	  Normal  RegisteredNode           4m21s  node-controller  Node ha-106302 event: Registered Node ha-106302 in Controller
	
	
	Name:               ha-106302-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_20_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:20:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:23:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 05 Dec 2024 19:22:50 +0000   Thu, 05 Dec 2024 19:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-106302-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ca37a23968d4b139155a7b713c26828
	  System UUID:                3ca37a23-968d-4b13-9155-a7b713c26828
	  Boot ID:                    36db6c69-1ef9-45e9-8548-ed0c2d08168d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9kxtc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 etcd-ha-106302-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m43s
	  kube-system                 kindnet-thcsp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m45s
	  kube-system                 kube-apiserver-ha-106302-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-controller-manager-ha-106302-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-proxy-n57lf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-scheduler-ha-106302-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-vip-ha-106302-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m45s (x8 over 5m45s)  kubelet          Node ha-106302-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m45s (x8 over 5m45s)  kubelet          Node ha-106302-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m45s (x7 over 5m45s)  kubelet          Node ha-106302-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m41s                  node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  RegisteredNode           5m36s                  node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-106302-m02 event: Registered Node ha-106302-m02 in Controller
	  Normal  NodeNotReady             116s                   node-controller  Node ha-106302-m02 status is now: NodeNotReady
	
	
	Name:               ha-106302-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_22_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:22:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:23:03 +0000   Thu, 05 Dec 2024 19:22:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.151
	  Hostname:    ha-106302-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c79436ccca5a4dcb864b64b8f1638e64
	  System UUID:                c79436cc-ca5a-4dcb-864b-64b8f1638e64
	  Boot ID:                    c0d22d1e-5115-47a7-a1b2-4a76f9bfc0f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9tp62                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 etcd-ha-106302-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m28s
	  kube-system                 kindnet-wdsv9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m30s
	  kube-system                 kube-apiserver-ha-106302-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-controller-manager-ha-106302-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-proxy-pghdx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-scheduler-ha-106302-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-vip-ha-106302-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m30s (x8 over 4m30s)  kubelet          Node ha-106302-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m30s (x8 over 4m30s)  kubelet          Node ha-106302-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m30s (x7 over 4m30s)  kubelet          Node ha-106302-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-106302-m03 event: Registered Node ha-106302-m03 in Controller
	
	
	Name:               ha-106302-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-106302-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=ha-106302
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T19_23_10_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:23:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-106302-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:26:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:23:41 +0000   Thu, 05 Dec 2024 19:23:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ha-106302-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 230adc0a6a8a4784a2711e0f05c0dc5c
	  System UUID:                230adc0a-6a8a-4784-a271-1e0f05c0dc5c
	  Boot ID:                    c550c7a6-b9cf-4484-890e-5c6b9b697be6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4x5qd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m22s
	  kube-system                 kube-proxy-2dvtn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m22s                  cidrAllocator    Node ha-106302-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m22s (x2 over 3m23s)  kubelet          Node ha-106302-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s (x2 over 3m23s)  kubelet          Node ha-106302-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x2 over 3m23s)  kubelet          Node ha-106302-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m21s                  node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  RegisteredNode           3m21s                  node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  RegisteredNode           3m21s                  node-controller  Node ha-106302-m04 event: Registered Node ha-106302-m04 in Controller
	  Normal  NodeReady                3m1s                   kubelet          Node ha-106302-m04 status is now: NodeReady
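
The `describe nodes` block above is the relevant evidence for the multi-control-plane failures: ha-106302-m02 carries `node.kubernetes.io/unreachable` taints and its conditions have gone `Unknown` after its kubelet stopped posting status, while ha-106302, ha-106302-m03 and ha-106302-m04 remain Ready. A minimal client-go sketch (an illustration, not part of the test harness; the kubeconfig path is an assumption) that surfaces the same Ready conditions programmatically:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// minikube merges its profiles into ~/.kube/config by default.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, cond := range n.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				// ha-106302-m02 reports Unknown once its kubelet stops posting status.
				fmt.Printf("%-16s Ready=%s (%s)\n", n.Name, cond.Status, cond.Reason)
			}
		}
	}
}
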
	
	
	==> dmesg <==
	[Dec 5 19:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052678] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040068] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.967635] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.737822] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.642469] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.132933] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.059010] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077817] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.173461] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.135588] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.266467] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.207512] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +3.975007] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.063464] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.124511] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.093371] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.093366] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.201097] kauditd_printk_skb: 34 callbacks suppressed
	[Dec 5 19:20] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a] <==
	{"level":"warn","ts":"2024-12-05T19:26:32.065680Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.081908Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.165119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.235058Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.244047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.248155Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.253363Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.265338Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.323107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.330292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.337630Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.342859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.348860Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.357089Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.363390Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.365801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.371022Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.374616Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.378542Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.384119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.394204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.402134Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T19:26:32.430074Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.22:2380/version","remote-member-id":"6cb04787fcad1ce5","error":"Get \"https://192.168.39.22:2380/version\": dial tcp 192.168.39.22:2380: i/o timeout"}
	{"level":"warn","ts":"2024-12-05T19:26:32.430156Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6cb04787fcad1ce5","error":"Get \"https://192.168.39.22:2380/version\": dial tcp 192.168.39.22:2380: i/o timeout"}
	{"level":"warn","ts":"2024-12-05T19:26:32.465123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8fbc2df34e14192d","from":"8fbc2df34e14192d","remote-peer-id":"6cb04787fcad1ce5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:26:32 up 7 min,  0 users,  load average: 0.39, 0.30, 0.15
	Linux ha-106302 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e] <==
	I1205 19:25:58.032961       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:26:08.033900       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:26:08.033997       1 main.go:301] handling current node
	I1205 19:26:08.034040       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:26:08.034061       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:26:08.034788       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:26:08.034868       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:26:08.035323       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:26:08.036186       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:26:18.031621       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:26:18.031663       1 main.go:301] handling current node
	I1205 19:26:18.031679       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:26:18.031683       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:26:18.031927       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:26:18.031962       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:26:18.032073       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:26:18.032101       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	I1205 19:26:28.040181       1 main.go:297] Handling node with IPs: map[192.168.39.185:{}]
	I1205 19:26:28.040364       1 main.go:301] handling current node
	I1205 19:26:28.040415       1 main.go:297] Handling node with IPs: map[192.168.39.22:{}]
	I1205 19:26:28.040435       1 main.go:324] Node ha-106302-m02 has CIDR [10.244.1.0/24] 
	I1205 19:26:28.040849       1 main.go:297] Handling node with IPs: map[192.168.39.151:{}]
	I1205 19:26:28.040902       1 main.go:324] Node ha-106302-m03 has CIDR [10.244.2.0/24] 
	I1205 19:26:28.041100       1 main.go:297] Handling node with IPs: map[192.168.39.7:{}]
	I1205 19:26:28.041125       1 main.go:324] Node ha-106302-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44] <==
	W1205 19:19:46.101456       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.185]
	I1205 19:19:46.102689       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 19:19:46.107444       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 19:19:46.330379       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 19:19:47.696704       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 19:19:47.715088       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 19:19:47.729079       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 19:19:52.034082       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1205 19:19:52.100936       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1205 19:22:38.001032       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32830: use of closed network connection
	E1205 19:22:38.204236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32840: use of closed network connection
	E1205 19:22:38.401399       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32852: use of closed network connection
	E1205 19:22:38.650810       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32868: use of closed network connection
	E1205 19:22:38.848239       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32882: use of closed network connection
	E1205 19:22:39.039033       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32892: use of closed network connection
	E1205 19:22:39.233185       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32904: use of closed network connection
	E1205 19:22:39.423024       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32930: use of closed network connection
	E1205 19:22:39.623335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32946: use of closed network connection
	E1205 19:22:39.929919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32972: use of closed network connection
	E1205 19:22:40.109732       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:32994: use of closed network connection
	E1205 19:22:40.313792       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33004: use of closed network connection
	E1205 19:22:40.512273       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33032: use of closed network connection
	E1205 19:22:40.696838       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33064: use of closed network connection
	E1205 19:22:40.891466       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33092: use of closed network connection
	W1205 19:23:56.103047       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.151 192.168.39.185]
	
	
	==> kube-controller-manager [c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb] <==
	I1205 19:22:37.515258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.952µs"
	I1205 19:22:50.027185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:22:51.994933       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302"
	I1205 19:23:03.348987       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m03"
	I1205 19:23:10.074709       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-106302-m04\" does not exist"
	I1205 19:23:10.130455       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-106302-m04" podCIDRs=["10.244.3.0/24"]
	I1205 19:23:10.130559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.130592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.405830       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:10.799985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:11.200921       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-106302-m04"
	I1205 19:23:11.286372       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:20.510971       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.164993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.165813       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-106302-m04"
	I1205 19:23:31.181172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:31.224422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:23:41.047269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m04"
	I1205 19:24:36.318018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:36.318367       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-106302-m04"
	I1205 19:24:36.348027       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:36.462551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.68033ms"
	I1205 19:24:36.463140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="102.944µs"
	I1205 19:24:36.509355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	I1205 19:24:41.525728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-106302-m02"
	
	
	==> kube-proxy [013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 19:19:53.137314       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 19:19:53.171420       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E1205 19:19:53.171824       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 19:19:53.214655       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 19:19:53.214741       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 19:19:53.214788       1 server_linux.go:169] "Using iptables Proxier"
	I1205 19:19:53.217916       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 19:19:53.218705       1 server.go:483] "Version info" version="v1.31.2"
	I1205 19:19:53.218777       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:19:53.220962       1 config.go:199] "Starting service config controller"
	I1205 19:19:53.221650       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 19:19:53.221992       1 config.go:105] "Starting endpoint slice config controller"
	I1205 19:19:53.222064       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 19:19:53.223609       1 config.go:328] "Starting node config controller"
	I1205 19:19:53.226006       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 19:19:53.322722       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 19:19:53.322841       1 shared_informer.go:320] Caches are synced for service config
	I1205 19:19:53.326785       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8] <==
	W1205 19:19:45.698374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:19:45.698482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 19:19:45.740149       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:19:45.740541       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1205 19:19:48.195246       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1205 19:22:02.375222       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tpm2m\": pod kube-proxy-tpm2m is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tpm2m" node="ha-106302-m03"
	E1205 19:22:02.375416       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1976f453-f240-48ff-bcac-37351800ac58(kube-system/kube-proxy-tpm2m) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tpm2m"
	E1205 19:22:02.375449       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tpm2m\": pod kube-proxy-tpm2m is already assigned to node \"ha-106302-m03\"" pod="kube-system/kube-proxy-tpm2m"
	I1205 19:22:02.375580       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tpm2m" node="ha-106302-m03"
	E1205 19:22:02.382616       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wdsv9\": pod kindnet-wdsv9 is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-wdsv9" node="ha-106302-m03"
	E1205 19:22:02.382763       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 83d82f5d-42c3-47be-af20-41b82c16b114(kube-system/kindnet-wdsv9) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-wdsv9"
	E1205 19:22:02.382784       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wdsv9\": pod kindnet-wdsv9 is already assigned to node \"ha-106302-m03\"" pod="kube-system/kindnet-wdsv9"
	I1205 19:22:02.382811       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wdsv9" node="ha-106302-m03"
	E1205 19:22:02.429049       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pghdx\": pod kube-proxy-pghdx is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pghdx" node="ha-106302-m03"
	E1205 19:22:02.429116       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 915060a3-353c-4a2c-a9d6-494206776446(kube-system/kube-proxy-pghdx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-pghdx"
	E1205 19:22:02.429132       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pghdx\": pod kube-proxy-pghdx is already assigned to node \"ha-106302-m03\"" pod="kube-system/kube-proxy-pghdx"
	I1205 19:22:02.429156       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-pghdx" node="ha-106302-m03"
	E1205 19:22:32.450165       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-p8z47\": pod busybox-7dff88458-p8z47 is already assigned to node \"ha-106302\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-p8z47" node="ha-106302"
	E1205 19:22:32.450464       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 16e14c1a-196d-42a8-b245-1a488cb9667f(default/busybox-7dff88458-p8z47) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-p8z47"
	E1205 19:22:32.450610       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-p8z47\": pod busybox-7dff88458-p8z47 is already assigned to node \"ha-106302\"" pod="default/busybox-7dff88458-p8z47"
	I1205 19:22:32.450729       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-p8z47" node="ha-106302"
	E1205 19:22:32.450776       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9tp62\": pod busybox-7dff88458-9tp62 is already assigned to node \"ha-106302-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9tp62" node="ha-106302-m03"
	E1205 19:22:32.459571       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod afb0c778-acb1-4db0-b0b6-f054049d0a9d(default/busybox-7dff88458-9tp62) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-9tp62"
	E1205 19:22:32.460188       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9tp62\": pod busybox-7dff88458-9tp62 is already assigned to node \"ha-106302-m03\"" pod="default/busybox-7dff88458-9tp62"
	I1205 19:22:32.460282       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-9tp62" node="ha-106302-m03"
	
	
	==> kubelet <==
	Dec 05 19:24:57 ha-106302 kubelet[1308]: E1205 19:24:57.781563    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426697781244346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:24:57 ha-106302 kubelet[1308]: E1205 19:24:57.781621    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426697781244346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:07 ha-106302 kubelet[1308]: E1205 19:25:07.783663    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426707783267296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:07 ha-106302 kubelet[1308]: E1205 19:25:07.783686    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426707783267296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:17 ha-106302 kubelet[1308]: E1205 19:25:17.787301    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426717786088822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:17 ha-106302 kubelet[1308]: E1205 19:25:17.788092    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426717786088822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:27 ha-106302 kubelet[1308]: E1205 19:25:27.791254    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426727789306197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:27 ha-106302 kubelet[1308]: E1205 19:25:27.792185    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426727789306197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:37 ha-106302 kubelet[1308]: E1205 19:25:37.793643    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426737793262536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:37 ha-106302 kubelet[1308]: E1205 19:25:37.793688    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426737793262536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.685793    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 19:25:47 ha-106302 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 19:25:47 ha-106302 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.795235    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426747794906816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:47 ha-106302 kubelet[1308]: E1205 19:25:47.795258    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426747794906816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:57 ha-106302 kubelet[1308]: E1205 19:25:57.797302    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426757796435936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:25:57 ha-106302 kubelet[1308]: E1205 19:25:57.798201    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426757796435936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:07 ha-106302 kubelet[1308]: E1205 19:26:07.800104    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426767799828720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:07 ha-106302 kubelet[1308]: E1205 19:26:07.800714    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426767799828720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:17 ha-106302 kubelet[1308]: E1205 19:26:17.806169    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426777803286232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:17 ha-106302 kubelet[1308]: E1205 19:26:17.806235    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426777803286232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:27 ha-106302 kubelet[1308]: E1205 19:26:27.811914    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426787809189066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 19:26:27 ha-106302 kubelet[1308]: E1205 19:26:27.812304    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733426787809189066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-106302 -n ha-106302
helpers_test.go:261: (dbg) Run:  kubectl --context ha-106302 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.41s)
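Triage note (editorial, not part of the captured test output): the 6.41s failure above is the HA status check that runs immediately after the secondary-node restart attempt. A minimal sketch for replaying that check locally, assuming the ha-106302 profile from this run is still present and using only command forms that already appear in this report:

	# hypothetical local replay of the failing HA checks (editor sketch, not captured output)
	out/minikube-linux-amd64 node start m02 -p ha-106302 -v=7 --alsologtostderr
	out/minikube-linux-amd64 status -p ha-106302 -v=7 --alsologtostderr
	out/minikube-linux-amd64 node list -p ha-106302 -v=7 --alsologtostderr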

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (360.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-106302 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-106302 -v=7 --alsologtostderr
E1205 19:28:15.012973  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-106302 -v=7 --alsologtostderr: exit status 82 (2m1.997681942s)

                                                
                                                
-- stdout --
	* Stopping node "ha-106302-m04"  ...
	* Stopping node "ha-106302-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:26:33.523350  554371 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:26:33.523617  554371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:26:33.523628  554371 out.go:358] Setting ErrFile to fd 2...
	I1205 19:26:33.523633  554371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:26:33.523865  554371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:26:33.524147  554371 out.go:352] Setting JSON to false
	I1205 19:26:33.524246  554371 mustload.go:65] Loading cluster: ha-106302
	I1205 19:26:33.524773  554371 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:26:33.524866  554371 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:26:33.525162  554371 mustload.go:65] Loading cluster: ha-106302
	I1205 19:26:33.525384  554371 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:26:33.525453  554371 stop.go:39] StopHost: ha-106302-m04
	I1205 19:26:33.526056  554371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:26:33.526129  554371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:26:33.542049  554371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I1205 19:26:33.542684  554371 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:26:33.543397  554371 main.go:141] libmachine: Using API Version  1
	I1205 19:26:33.543426  554371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:26:33.543805  554371 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:26:33.546286  554371 out.go:177] * Stopping node "ha-106302-m04"  ...
	I1205 19:26:33.548043  554371 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 19:26:33.548100  554371 main.go:141] libmachine: (ha-106302-m04) Calling .DriverName
	I1205 19:26:33.548382  554371 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 19:26:33.548414  554371 main.go:141] libmachine: (ha-106302-m04) Calling .GetSSHHostname
	I1205 19:26:33.551567  554371 main.go:141] libmachine: (ha-106302-m04) DBG | domain ha-106302-m04 has defined MAC address 52:54:00:74:92:b5 in network mk-ha-106302
	I1205 19:26:33.551956  554371 main.go:141] libmachine: (ha-106302-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:92:b5", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:22:57 +0000 UTC Type:0 Mac:52:54:00:74:92:b5 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-106302-m04 Clientid:01:52:54:00:74:92:b5}
	I1205 19:26:33.551979  554371 main.go:141] libmachine: (ha-106302-m04) DBG | domain ha-106302-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:74:92:b5 in network mk-ha-106302
	I1205 19:26:33.552117  554371 main.go:141] libmachine: (ha-106302-m04) Calling .GetSSHPort
	I1205 19:26:33.552335  554371 main.go:141] libmachine: (ha-106302-m04) Calling .GetSSHKeyPath
	I1205 19:26:33.552468  554371 main.go:141] libmachine: (ha-106302-m04) Calling .GetSSHUsername
	I1205 19:26:33.552618  554371 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m04/id_rsa Username:docker}
	I1205 19:26:33.647772  554371 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 19:26:33.702402  554371 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 19:26:33.757021  554371 main.go:141] libmachine: Stopping "ha-106302-m04"...
	I1205 19:26:33.757058  554371 main.go:141] libmachine: (ha-106302-m04) Calling .GetState
	I1205 19:26:33.758675  554371 main.go:141] libmachine: (ha-106302-m04) Calling .Stop
	I1205 19:26:33.762554  554371 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 0/120
	I1205 19:26:35.024367  554371 main.go:141] libmachine: (ha-106302-m04) Calling .GetState
	I1205 19:26:35.025650  554371 main.go:141] libmachine: Machine "ha-106302-m04" was stopped.
	I1205 19:26:35.025680  554371 stop.go:75] duration metric: took 1.477632778s to stop
	I1205 19:26:35.025712  554371 stop.go:39] StopHost: ha-106302-m03
	I1205 19:26:35.026034  554371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:26:35.026092  554371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:26:35.042152  554371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36681
	I1205 19:26:35.042766  554371 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:26:35.043508  554371 main.go:141] libmachine: Using API Version  1
	I1205 19:26:35.043535  554371 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:26:35.043874  554371 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:26:35.045757  554371 out.go:177] * Stopping node "ha-106302-m03"  ...
	I1205 19:26:35.047107  554371 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 19:26:35.047143  554371 main.go:141] libmachine: (ha-106302-m03) Calling .DriverName
	I1205 19:26:35.047430  554371 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 19:26:35.047454  554371 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHHostname
	I1205 19:26:35.050446  554371 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:26:35.050863  554371 main.go:141] libmachine: (ha-106302-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:65:e2", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:21:29 +0000 UTC Type:0 Mac:52:54:00:e6:65:e2 Iaid: IPaddr:192.168.39.151 Prefix:24 Hostname:ha-106302-m03 Clientid:01:52:54:00:e6:65:e2}
	I1205 19:26:35.050892  554371 main.go:141] libmachine: (ha-106302-m03) DBG | domain ha-106302-m03 has defined IP address 192.168.39.151 and MAC address 52:54:00:e6:65:e2 in network mk-ha-106302
	I1205 19:26:35.051029  554371 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHPort
	I1205 19:26:35.051204  554371 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHKeyPath
	I1205 19:26:35.051352  554371 main.go:141] libmachine: (ha-106302-m03) Calling .GetSSHUsername
	I1205 19:26:35.051478  554371 sshutil.go:53] new ssh client: &{IP:192.168.39.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m03/id_rsa Username:docker}
	I1205 19:26:35.138981  554371 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 19:26:35.195134  554371 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 19:26:35.252313  554371 main.go:141] libmachine: Stopping "ha-106302-m03"...
	I1205 19:26:35.252347  554371 main.go:141] libmachine: (ha-106302-m03) Calling .GetState
	I1205 19:26:35.254025  554371 main.go:141] libmachine: (ha-106302-m03) Calling .Stop
	I1205 19:26:35.257808  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 0/120
	I1205 19:26:36.259332  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 1/120
	I1205 19:26:37.261760  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 2/120
	I1205 19:26:38.263111  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 3/120
	I1205 19:26:39.265341  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 4/120
	I1205 19:26:40.267626  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 5/120
	I1205 19:26:41.269536  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 6/120
	I1205 19:26:42.271264  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 7/120
	I1205 19:26:43.272941  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 8/120
	I1205 19:26:44.274659  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 9/120
	I1205 19:26:45.276732  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 10/120
	I1205 19:26:46.278525  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 11/120
	I1205 19:26:47.280097  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 12/120
	I1205 19:26:48.281534  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 13/120
	I1205 19:26:49.282937  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 14/120
	I1205 19:26:50.285304  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 15/120
	I1205 19:26:51.286930  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 16/120
	I1205 19:26:52.288350  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 17/120
	I1205 19:26:53.289840  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 18/120
	I1205 19:26:54.291431  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 19/120
	I1205 19:26:55.293569  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 20/120
	I1205 19:26:56.296577  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 21/120
	I1205 19:26:57.298127  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 22/120
	I1205 19:26:58.299902  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 23/120
	I1205 19:26:59.301350  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 24/120
	I1205 19:27:00.303442  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 25/120
	I1205 19:27:01.305002  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 26/120
	I1205 19:27:02.306534  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 27/120
	I1205 19:27:03.308281  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 28/120
	I1205 19:27:04.310481  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 29/120
	I1205 19:27:05.312496  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 30/120
	I1205 19:27:06.314129  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 31/120
	I1205 19:27:07.315507  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 32/120
	I1205 19:27:08.317188  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 33/120
	I1205 19:27:09.318614  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 34/120
	I1205 19:27:10.320509  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 35/120
	I1205 19:27:11.322285  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 36/120
	I1205 19:27:12.323951  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 37/120
	I1205 19:27:13.325299  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 38/120
	I1205 19:27:14.326538  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 39/120
	I1205 19:27:15.328352  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 40/120
	I1205 19:27:16.329675  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 41/120
	I1205 19:27:17.331484  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 42/120
	I1205 19:27:18.332970  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 43/120
	I1205 19:27:19.334197  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 44/120
	I1205 19:27:20.335646  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 45/120
	I1205 19:27:21.337079  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 46/120
	I1205 19:27:22.338384  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 47/120
	I1205 19:27:23.340205  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 48/120
	I1205 19:27:24.341756  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 49/120
	I1205 19:27:25.343827  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 50/120
	I1205 19:27:26.345465  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 51/120
	I1205 19:27:27.347077  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 52/120
	I1205 19:27:28.348821  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 53/120
	I1205 19:27:29.351059  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 54/120
	I1205 19:27:30.353491  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 55/120
	I1205 19:27:31.354988  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 56/120
	I1205 19:27:32.357164  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 57/120
	I1205 19:27:33.358705  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 58/120
	I1205 19:27:34.360635  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 59/120
	I1205 19:27:35.362922  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 60/120
	I1205 19:27:36.364380  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 61/120
	I1205 19:27:37.365873  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 62/120
	I1205 19:27:38.367242  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 63/120
	I1205 19:27:39.368788  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 64/120
	I1205 19:27:40.370781  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 65/120
	I1205 19:27:41.372398  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 66/120
	I1205 19:27:42.373841  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 67/120
	I1205 19:27:43.375455  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 68/120
	I1205 19:27:44.376928  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 69/120
	I1205 19:27:45.378867  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 70/120
	I1205 19:27:46.380294  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 71/120
	I1205 19:27:47.381850  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 72/120
	I1205 19:27:48.383336  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 73/120
	I1205 19:27:49.384904  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 74/120
	I1205 19:27:50.387012  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 75/120
	I1205 19:27:51.388623  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 76/120
	I1205 19:27:52.390259  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 77/120
	I1205 19:27:53.391862  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 78/120
	I1205 19:27:54.393817  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 79/120
	I1205 19:27:55.395896  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 80/120
	I1205 19:27:56.397631  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 81/120
	I1205 19:27:57.398978  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 82/120
	I1205 19:27:58.400632  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 83/120
	I1205 19:27:59.401950  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 84/120
	I1205 19:28:00.403675  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 85/120
	I1205 19:28:01.405286  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 86/120
	I1205 19:28:02.406852  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 87/120
	I1205 19:28:03.408381  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 88/120
	I1205 19:28:04.409615  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 89/120
	I1205 19:28:05.411464  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 90/120
	I1205 19:28:06.412994  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 91/120
	I1205 19:28:07.414651  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 92/120
	I1205 19:28:08.416889  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 93/120
	I1205 19:28:09.418207  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 94/120
	I1205 19:28:10.419671  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 95/120
	I1205 19:28:11.421183  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 96/120
	I1205 19:28:12.422578  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 97/120
	I1205 19:28:13.424405  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 98/120
	I1205 19:28:14.425789  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 99/120
	I1205 19:28:15.427569  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 100/120
	I1205 19:28:16.429103  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 101/120
	I1205 19:28:17.430653  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 102/120
	I1205 19:28:18.432085  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 103/120
	I1205 19:28:19.433607  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 104/120
	I1205 19:28:20.435701  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 105/120
	I1205 19:28:21.437044  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 106/120
	I1205 19:28:22.438531  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 107/120
	I1205 19:28:23.439932  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 108/120
	I1205 19:28:24.441583  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 109/120
	I1205 19:28:25.443840  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 110/120
	I1205 19:28:26.445343  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 111/120
	I1205 19:28:27.446901  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 112/120
	I1205 19:28:28.448474  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 113/120
	I1205 19:28:29.449963  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 114/120
	I1205 19:28:30.451383  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 115/120
	I1205 19:28:31.452894  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 116/120
	I1205 19:28:32.454325  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 117/120
	I1205 19:28:33.455791  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 118/120
	I1205 19:28:34.457382  554371 main.go:141] libmachine: (ha-106302-m03) Waiting for machine to stop 119/120
	I1205 19:28:35.458950  554371 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 19:28:35.459043  554371 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 19:28:35.461038  554371 out.go:201] 
	W1205 19:28:35.462659  554371 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 19:28:35.462677  554371 out.go:270] * 
	* 
	W1205 19:28:35.465914  554371 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 19:28:35.468313  554371 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-106302 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-106302 --wait=true -v=7 --alsologtostderr
E1205 19:28:42.714557  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:30:51.381172  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:32:14.448530  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-106302 --wait=true -v=7 --alsologtostderr: (3m55.31657328s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-106302
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-106302 -n ha-106302
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-106302 logs -n 25: (2.258587889s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m04 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp testdata/cp-test.txt                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302:/home/docker/cp-test_ha-106302-m04_ha-106302.txt                     |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302 sudo cat                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302.txt                               |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03:/home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m03 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-106302 node stop m02 -v=7                                                   | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-106302 node start m02 -v=7                                                  | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-106302 -v=7                                                         | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-106302 -v=7                                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-106302 --wait=true -v=7                                                  | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:28 UTC | 05 Dec 24 19:32 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-106302                                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:32 UTC |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:28:35
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:28:35.527691  554879 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:28:35.527838  554879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:28:35.527850  554879 out.go:358] Setting ErrFile to fd 2...
	I1205 19:28:35.527859  554879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:28:35.528059  554879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:28:35.528738  554879 out.go:352] Setting JSON to false
	I1205 19:28:35.529796  554879 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7862,"bootTime":1733419054,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:28:35.529863  554879 start.go:139] virtualization: kvm guest
	I1205 19:28:35.532623  554879 out.go:177] * [ha-106302] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:28:35.534469  554879 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:28:35.534494  554879 notify.go:220] Checking for updates...
	I1205 19:28:35.537413  554879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:28:35.538918  554879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:28:35.540417  554879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:28:35.541827  554879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:28:35.543182  554879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:28:35.545069  554879 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:28:35.545206  554879 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:28:35.545691  554879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:28:35.545751  554879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:28:35.562577  554879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I1205 19:28:35.563124  554879 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:28:35.563709  554879 main.go:141] libmachine: Using API Version  1
	I1205 19:28:35.563729  554879 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:28:35.564181  554879 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:28:35.564417  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:28:35.602570  554879 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 19:28:35.603810  554879 start.go:297] selected driver: kvm2
	I1205 19:28:35.603827  554879 start.go:901] validating driver "kvm2" against &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false def
ault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:28:35.604005  554879 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:28:35.604473  554879 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:28:35.604583  554879 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:28:35.620368  554879 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:28:35.621347  554879 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:28:35.621401  554879 cni.go:84] Creating CNI manager for ""
	I1205 19:28:35.621480  554879 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 19:28:35.621560  554879 start.go:340] cluster config:
	{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fal
se headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
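The profile config dumped above describes the HA layout this test restarts: three control-plane nodes (192.168.39.185, 192.168.39.22, 192.168.39.151) and one worker (192.168.39.7) behind the API-server VIP 192.168.39.254, all on Kubernetes v1.31.2 with the crio runtime. A minimal sketch, assuming the kubectl context is named after the profile as minikube normally arranges, for inspecting that topology from the Jenkins host once the restart completes:

	# List the nodes minikube tracks for this profile (m02/m03/m04 come from the Nodes list above).
	out/minikube-linux-amd64 -p ha-106302 node list

	# Cross-check against the API server through the VIP-backed context; -o wide includes node IPs.
	kubectl --context ha-106302 get nodes -o wide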
	I1205 19:28:35.621745  554879 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:28:35.624712  554879 out.go:177] * Starting "ha-106302" primary control-plane node in "ha-106302" cluster
	I1205 19:28:35.626078  554879 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:28:35.626117  554879 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:28:35.626128  554879 cache.go:56] Caching tarball of preloaded images
	I1205 19:28:35.626234  554879 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:28:35.626248  554879 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:28:35.626385  554879 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:28:35.626575  554879 start.go:360] acquireMachinesLock for ha-106302: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:28:35.626623  554879 start.go:364] duration metric: took 27.573µs to acquireMachinesLock for "ha-106302"
	I1205 19:28:35.626644  554879 start.go:96] Skipping create...Using existing machine configuration
	I1205 19:28:35.626659  554879 fix.go:54] fixHost starting: 
	I1205 19:28:35.626932  554879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:28:35.626971  554879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:28:35.641915  554879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I1205 19:28:35.642454  554879 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:28:35.643011  554879 main.go:141] libmachine: Using API Version  1
	I1205 19:28:35.643045  554879 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:28:35.643373  554879 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:28:35.643551  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:28:35.643683  554879 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:28:35.645108  554879 fix.go:112] recreateIfNeeded on ha-106302: state=Running err=<nil>
	W1205 19:28:35.645131  554879 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 19:28:35.646930  554879 out.go:177] * Updating the running kvm2 "ha-106302" VM ...
	I1205 19:28:35.648224  554879 machine.go:93] provisionDockerMachine start ...
	I1205 19:28:35.648242  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:28:35.648495  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:35.650848  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.651235  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:35.651259  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.651393  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:28:35.651587  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.651786  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.651951  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:28:35.652146  554879 main.go:141] libmachine: Using SSH client type: native
	I1205 19:28:35.652380  554879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:28:35.652394  554879 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 19:28:35.770011  554879 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:28:35.770053  554879 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:28:35.770355  554879 buildroot.go:166] provisioning hostname "ha-106302"
	I1205 19:28:35.770387  554879 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:28:35.770633  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:35.773659  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.774081  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:35.774115  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.774317  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:28:35.774514  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.774705  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.774851  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:28:35.775027  554879 main.go:141] libmachine: Using SSH client type: native
	I1205 19:28:35.775235  554879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:28:35.775255  554879 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302 && echo "ha-106302" | sudo tee /etc/hostname
	I1205 19:28:35.904598  554879 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:28:35.904633  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:35.907480  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.907776  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:35.907807  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.907986  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:28:35.908216  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.908465  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.908629  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:28:35.908826  554879 main.go:141] libmachine: Using SSH client type: native
	I1205 19:28:35.909067  554879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:28:35.909091  554879 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:28:36.025798  554879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:28:36.025836  554879 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:28:36.025863  554879 buildroot.go:174] setting up certificates
	I1205 19:28:36.025875  554879 provision.go:84] configureAuth start
	I1205 19:28:36.025885  554879 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:28:36.026193  554879 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:28:36.029004  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.029347  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:36.029370  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.029544  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:36.031867  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.032229  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:36.032260  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.032411  554879 provision.go:143] copyHostCerts
	I1205 19:28:36.032445  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:28:36.032491  554879 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:28:36.032516  554879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:28:36.032601  554879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:28:36.032732  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:28:36.032764  554879 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:28:36.032774  554879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:28:36.032816  554879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:28:36.032896  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:28:36.032920  554879 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:28:36.032930  554879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:28:36.032965  554879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:28:36.033050  554879 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302 san=[127.0.0.1 192.168.39.185 ha-106302 localhost minikube]
	I1205 19:28:36.273629  554879 provision.go:177] copyRemoteCerts
	I1205 19:28:36.273737  554879 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:28:36.273790  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:36.276964  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.277414  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:36.277450  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.277642  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:28:36.277878  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:36.278078  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:28:36.278207  554879 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:28:36.364054  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:28:36.364143  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 19:28:36.395458  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:28:36.395622  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:28:36.427103  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:28:36.427177  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:28:36.456007  554879 provision.go:87] duration metric: took 430.110229ms to configureAuth
	I1205 19:28:36.456054  554879 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:28:36.456322  554879 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:28:36.456401  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:36.459122  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.459517  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:36.459548  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.459701  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:28:36.459932  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:36.460092  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:36.460251  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:28:36.460439  554879 main.go:141] libmachine: Using SSH client type: native
	I1205 19:28:36.460625  554879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:28:36.460643  554879 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:30:07.367134  554879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:30:07.367178  554879 machine.go:96] duration metric: took 1m31.718940922s to provisionDockerMachine
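Nearly all of the 1m31.7s reported above is spent in the single SSH command that writes /etc/sysconfig/crio.minikube and then runs `systemctl restart crio` (issued at 19:28:36, returning at 19:30:07). A hedged sketch for narrowing down where that time went on the VM, assuming the guest is still reachable over SSH and its journal covers the window:

	# Inspect the crio unit's journal around the restart window (times are guest-local UTC).
	out/minikube-linux-amd64 -p ha-106302 ssh "sudo journalctl -u crio --no-pager --since '19:28:30' --until '19:30:10'"

	# Show when the unit last finished starting.
	out/minikube-linux-amd64 -p ha-106302 ssh "systemctl show crio -p ActiveEnterTimestamp"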
	I1205 19:30:07.367194  554879 start.go:293] postStartSetup for "ha-106302" (driver="kvm2")
	I1205 19:30:07.367215  554879 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:30:07.367244  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.367658  554879 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:30:07.367707  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:30:07.371043  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.371563  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.371594  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.371786  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:30:07.372006  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.372188  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:30:07.372320  554879 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:30:07.460643  554879 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:30:07.465920  554879 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:30:07.465958  554879 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:30:07.466031  554879 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:30:07.466131  554879 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:30:07.466146  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:30:07.466244  554879 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:30:07.476481  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:30:07.502151  554879 start.go:296] duration metric: took 134.939266ms for postStartSetup
	I1205 19:30:07.502222  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.502575  554879 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1205 19:30:07.502612  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:30:07.505499  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.505947  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.505974  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.506195  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:30:07.506469  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.506692  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:30:07.506885  554879 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	W1205 19:30:07.592361  554879 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1205 19:30:07.592397  554879 fix.go:56] duration metric: took 1m31.965743605s for fixHost
	I1205 19:30:07.592425  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:30:07.595282  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.595705  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.595741  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.595939  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:30:07.596141  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.596330  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.596464  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:30:07.596640  554879 main.go:141] libmachine: Using SSH client type: native
	I1205 19:30:07.596823  554879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:30:07.596835  554879 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:30:07.709615  554879 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733427007.672931795
	
	I1205 19:30:07.709641  554879 fix.go:216] guest clock: 1733427007.672931795
	I1205 19:30:07.709648  554879 fix.go:229] Guest: 2024-12-05 19:30:07.672931795 +0000 UTC Remote: 2024-12-05 19:30:07.592406126 +0000 UTC m=+92.110857371 (delta=80.525669ms)
	I1205 19:30:07.709695  554879 fix.go:200] guest clock delta is within tolerance: 80.525669ms
	I1205 19:30:07.709703  554879 start.go:83] releasing machines lock for "ha-106302", held for 1m32.08306783s
	I1205 19:30:07.709726  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.709976  554879 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:30:07.712483  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.712912  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.712937  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.713118  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.713683  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.713874  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.713991  554879 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:30:07.714054  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:30:07.714073  554879 ssh_runner.go:195] Run: cat /version.json
	I1205 19:30:07.714093  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:30:07.716530  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.716718  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.716899  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.716924  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.717142  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.717149  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:30:07.717170  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.717278  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:30:07.717361  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.717431  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.717510  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:30:07.717578  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:30:07.717653  554879 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:30:07.717726  554879 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:30:07.819951  554879 ssh_runner.go:195] Run: systemctl --version
	I1205 19:30:07.826622  554879 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:30:07.999269  554879 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:30:08.008334  554879 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:30:08.008475  554879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:30:08.018938  554879 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 19:30:08.018972  554879 start.go:495] detecting cgroup driver to use...
	I1205 19:30:08.019035  554879 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:30:08.036490  554879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:30:08.052073  554879 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:30:08.052152  554879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:30:08.067358  554879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:30:08.081765  554879 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:30:08.232512  554879 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:30:08.380609  554879 docker.go:233] disabling docker service ...
	I1205 19:30:08.380704  554879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:30:08.400224  554879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:30:08.415642  554879 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:30:08.563957  554879 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:30:08.712480  554879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:30:08.727242  554879 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:30:08.746369  554879 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:30:08.746442  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.757694  554879 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:30:08.757788  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.768879  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.780673  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.792491  554879 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:30:08.803824  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.815121  554879 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.826604  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.837876  554879 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:30:08.847651  554879 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:30:08.857645  554879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:30:09.000019  554879 ssh_runner.go:195] Run: sudo systemctl restart crio
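The sed edits above all rewrite /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and crio restart. A quick check, reusing the profile's own SSH plumbing, that the drop-in ended up with the values the log claims (pause image registry.k8s.io/pause:3.10, cgroupfs cgroup manager, conmon_cgroup "pod", and the unprivileged-port sysctl):

	# Show the fields rewritten by the sed commands above.
	out/minikube-linux-amd64 -p ha-106302 ssh "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"

	# Ask CRI-O itself which values it is effectively running with (same `crio config` used later in this log).
	out/minikube-linux-amd64 -p ha-106302 ssh "crio config 2>/dev/null | grep -E 'pause_image |cgroup_manager '"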
	I1205 19:30:09.240583  554879 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:30:09.240678  554879 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:30:09.246074  554879 start.go:563] Will wait 60s for crictl version
	I1205 19:30:09.246157  554879 ssh_runner.go:195] Run: which crictl
	I1205 19:30:09.250524  554879 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:30:09.291800  554879 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:30:09.291929  554879 ssh_runner.go:195] Run: crio --version
	I1205 19:30:09.323936  554879 ssh_runner.go:195] Run: crio --version
	I1205 19:30:09.358907  554879 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:30:09.360671  554879 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:30:09.363379  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:09.363750  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:09.363776  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:09.364045  554879 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:30:09.369359  554879 kubeadm.go:883] updating cluster {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:30:09.369575  554879 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:30:09.369646  554879 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:30:09.430190  554879 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:30:09.430219  554879 crio.go:433] Images already preloaded, skipping extraction
	I1205 19:30:09.430275  554879 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:30:09.465600  554879 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:30:09.465629  554879 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:30:09.465650  554879 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.2 crio true true} ...
	I1205 19:30:09.465782  554879 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
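The kubelet unit generated above pins --hostname-override and --node-ip to 192.168.39.185 and is written out as the 10-kubeadm.conf drop-in a few lines further down. A small sketch, assuming the node stays reachable, for confirming what systemd actually loaded:

	# Print the kubelet unit together with all of its drop-ins, including 10-kubeadm.conf.
	out/minikube-linux-amd64 -p ha-106302 ssh "systemctl cat kubelet"

	# Check that kubelet came back with the overridden ExecStart after the daemon-reload below.
	out/minikube-linux-amd64 -p ha-106302 ssh "sudo systemctl status kubelet --no-pager --full"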
	I1205 19:30:09.465872  554879 ssh_runner.go:195] Run: crio config
	I1205 19:30:09.519752  554879 cni.go:84] Creating CNI manager for ""
	I1205 19:30:09.519779  554879 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 19:30:09.519792  554879 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:30:09.519821  554879 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-106302 NodeName:ha-106302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:30:09.519982  554879 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-106302"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
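The kubeadm config above is rendered to /var/tmp/minikube/kubeadm.yaml.new (2289 bytes, per the scp further down) before kubeadm consumes it. A hedged sketch for sanity-checking the rendered file on the node; the `kubeadm config validate` subcommand is assumed to exist in the bundled v1.31.2 kubeadm:

	# Dump the rendered config exactly as it was copied onto the VM.
	out/minikube-linux-amd64 -p ha-106302 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"

	# Validate it with the kubeadm binary minikube found under /var/lib/minikube/binaries
	# (assumes `kubeadm config validate` is available in this release).
	out/minikube-linux-amd64 -p ha-106302 ssh "sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"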
	I1205 19:30:09.520014  554879 kube-vip.go:115] generating kube-vip config ...
	I1205 19:30:09.520079  554879 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:30:09.532477  554879 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:30:09.532600  554879 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
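The static-pod manifest above is what puts the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443 across the control planes. A minimal sketch for checking from the host that the VIP is actually held and the kube-vip container is up; being a static pod, it stays visible to crictl even when the API server is not answering:

	# The VIP should appear as a secondary address on eth0 of whichever control plane holds the lease
	# (use -n m02 / -n m03 to check the other control planes).
	out/minikube-linux-amd64 -p ha-106302 ssh "ip -4 addr show eth0 | grep 192.168.39.254"

	# The kube-vip static pod should be running under CRI-O.
	out/minikube-linux-amd64 -p ha-106302 ssh "sudo crictl ps --name kube-vip"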
	I1205 19:30:09.532669  554879 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:30:09.543152  554879 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:30:09.543232  554879 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 19:30:09.553019  554879 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 19:30:09.571160  554879 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:30:09.588722  554879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 19:30:09.607288  554879 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:30:09.626607  554879 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:30:09.631117  554879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:30:09.782494  554879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:30:09.798608  554879 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.185
	I1205 19:30:09.798643  554879 certs.go:194] generating shared ca certs ...
	I1205 19:30:09.798668  554879 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:30:09.798879  554879 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:30:09.798945  554879 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:30:09.798960  554879 certs.go:256] generating profile certs ...
	I1205 19:30:09.799068  554879 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:30:09.799107  554879 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.41d1e685
	I1205 19:30:09.799129  554879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.41d1e685 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.151 192.168.39.254]
	I1205 19:30:09.945544  554879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.41d1e685 ...
	I1205 19:30:09.945582  554879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.41d1e685: {Name:mk724d06bc0a47e33f486f39e278b61de9784910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:30:09.945762  554879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.41d1e685 ...
	I1205 19:30:09.945775  554879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.41d1e685: {Name:mkbcf4a6dca43a506ae36ad63ef2f4c9d1d6d2ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:30:09.945846  554879 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.41d1e685 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:30:09.946012  554879 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.41d1e685 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:30:09.946150  554879 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:30:09.946167  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:30:09.946180  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:30:09.946194  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:30:09.946205  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:30:09.946215  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:30:09.946232  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:30:09.946242  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:30:09.946250  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:30:09.946307  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:30:09.946338  554879 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:30:09.946348  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:30:09.946369  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:30:09.946397  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:30:09.946419  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:30:09.946455  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:30:09.946481  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:30:09.946496  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:30:09.946508  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:30:09.947122  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:30:09.974250  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:30:09.999270  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:30:10.024451  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:30:10.049541  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 19:30:10.090543  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:30:10.172357  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:30:10.209470  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:30:10.253907  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:30:10.287382  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:30:10.331088  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:30:10.356229  554879 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:30:10.375350  554879 ssh_runner.go:195] Run: openssl version
	I1205 19:30:10.386482  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:30:10.407968  554879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:30:10.417315  554879 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:30:10.417410  554879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:30:10.430973  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:30:10.447310  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:30:10.459696  554879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:30:10.465082  554879 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:30:10.465150  554879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:30:10.471435  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:30:10.481599  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:30:10.493473  554879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:30:10.498636  554879 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:30:10.498699  554879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:30:10.505493  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
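The pattern above is the standard OpenSSL hashed-certificate-directory convention: each CA file is installed under /usr/share/ca-certificates, and a symlink named after its subject hash (the value `openssl x509 -hash` prints) with a `.0` suffix is created in /etc/ssl/certs so that verification code can find it. Reproducing the same step by hand for the minikube CA shown above:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # e.g. b5213941.0 in the log above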
	I1205 19:30:10.515962  554879 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:30:10.520786  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 19:30:10.526817  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 19:30:10.532739  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 19:30:10.538422  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 19:30:10.544544  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 19:30:10.550558  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
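Each `-checkend 86400` call above is a pure validity probe: openssl exits 0 if the certificate remains valid for at least 86400 seconds (24 hours) and 1 if it expires within that window, which is presumably what would flag a certificate for regeneration during start. For example:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for >24h" || echo "expiring within 24h"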
	I1205 19:30:10.556505  554879 kubeadm.go:392] StartCluster: {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagecl
ass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:30:10.556696  554879 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:30:10.556799  554879 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:30:10.604655  554879 cri.go:89] found id: "a65bf505b34f9906f0f23b672cb884b36d128d9d24923b155e7a5a563c92cf4c"
	I1205 19:30:10.604682  554879 cri.go:89] found id: "38961b05e32f8216f32c6fd1edd38d7b637db2503dffdd56e7a329141bb7e441"
	I1205 19:30:10.604688  554879 cri.go:89] found id: "902fae35af9d2c269f7103e8a1d1a7b6c75461345d432f6499d7a05f2f65bfeb"
	I1205 19:30:10.604693  554879 cri.go:89] found id: "cf39c2cfb9e986346026b13c960f3e6b36c53e4a1f0a05d92f18314e7618bd25"
	I1205 19:30:10.604696  554879 cri.go:89] found id: "9d027dcc636efeaa92fb4d3b83f2e1b1e8f80d4f23b6e77400b693a9dd92a33d"
	I1205 19:30:10.604701  554879 cri.go:89] found id: "d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b"
	I1205 19:30:10.604705  554879 cri.go:89] found id: "71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07"
	I1205 19:30:10.604709  554879 cri.go:89] found id: "8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e"
	I1205 19:30:10.604713  554879 cri.go:89] found id: "013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d"
	I1205 19:30:10.604721  554879 cri.go:89] found id: "a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e"
	I1205 19:30:10.604725  554879 cri.go:89] found id: "73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a"
	I1205 19:30:10.604729  554879 cri.go:89] found id: "8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44"
	I1205 19:30:10.604750  554879 cri.go:89] found id: "dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8"
	I1205 19:30:10.604755  554879 cri.go:89] found id: "c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb"
	I1205 19:30:10.604763  554879 cri.go:89] found id: ""
	I1205 19:30:10.604819  554879 ssh_runner.go:195] Run: sudo runc list -f json
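The IDs listed above come from `crictl ps -a --quiet` filtered to the kube-system namespace label; the quiet form prints only container IDs, so mapping an ID back to its pod and container name takes one more step, for example (using the first ID from the list above):

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system   # same set, with names and states
    sudo crictl inspect a65bf505b34f9906f0f23b672cb884b36d128d9d24923b155e7a5a563c92cf4c | head -n 30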

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-106302 -n ha-106302
helpers_test.go:261: (dbg) Run:  kubectl --context ha-106302 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (360.34s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 stop -v=7 --alsologtostderr
E1205 19:33:15.012441  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-106302 stop -v=7 --alsologtostderr: exit status 82 (2m0.500424554s)

                                                
                                                
-- stdout --
	* Stopping node "ha-106302-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:32:51.407744  556652 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:32:51.407870  556652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:32:51.407879  556652 out.go:358] Setting ErrFile to fd 2...
	I1205 19:32:51.407883  556652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:32:51.408052  556652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:32:51.408301  556652 out.go:352] Setting JSON to false
	I1205 19:32:51.408392  556652 mustload.go:65] Loading cluster: ha-106302
	I1205 19:32:51.408871  556652 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:32:51.408963  556652 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:32:51.409156  556652 mustload.go:65] Loading cluster: ha-106302
	I1205 19:32:51.409285  556652 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:32:51.409311  556652 stop.go:39] StopHost: ha-106302-m04
	I1205 19:32:51.409742  556652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:32:51.409812  556652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:32:51.425408  556652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I1205 19:32:51.425999  556652 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:32:51.426653  556652 main.go:141] libmachine: Using API Version  1
	I1205 19:32:51.426678  556652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:32:51.427047  556652 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:32:51.429545  556652 out.go:177] * Stopping node "ha-106302-m04"  ...
	I1205 19:32:51.431146  556652 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 19:32:51.431178  556652 main.go:141] libmachine: (ha-106302-m04) Calling .DriverName
	I1205 19:32:51.431410  556652 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 19:32:51.431438  556652 main.go:141] libmachine: (ha-106302-m04) Calling .GetSSHHostname
	I1205 19:32:51.434638  556652 main.go:141] libmachine: (ha-106302-m04) DBG | domain ha-106302-m04 has defined MAC address 52:54:00:74:92:b5 in network mk-ha-106302
	I1205 19:32:51.435110  556652 main.go:141] libmachine: (ha-106302-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:92:b5", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:32:17 +0000 UTC Type:0 Mac:52:54:00:74:92:b5 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ha-106302-m04 Clientid:01:52:54:00:74:92:b5}
	I1205 19:32:51.435138  556652 main.go:141] libmachine: (ha-106302-m04) DBG | domain ha-106302-m04 has defined IP address 192.168.39.7 and MAC address 52:54:00:74:92:b5 in network mk-ha-106302
	I1205 19:32:51.435314  556652 main.go:141] libmachine: (ha-106302-m04) Calling .GetSSHPort
	I1205 19:32:51.435498  556652 main.go:141] libmachine: (ha-106302-m04) Calling .GetSSHKeyPath
	I1205 19:32:51.435644  556652 main.go:141] libmachine: (ha-106302-m04) Calling .GetSSHUsername
	I1205 19:32:51.435787  556652 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302-m04/id_rsa Username:docker}
	I1205 19:32:51.519891  556652 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 19:32:51.576572  556652 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
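Before the VM is stopped, the worker's CNI and Kubernetes configuration are copied aside; `rsync --archive --relative` reproduces the full source path under the backup root, so the files land in /var/lib/minikube/backup/etc/cni and /var/lib/minikube/backup/etc/kubernetes rather than being flattened. The same behaviour can be checked on the node with:

    sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup   # destination gets etc/kubernetes, not just kubernetes
    ls /var/lib/minikube/backup/etc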
	I1205 19:32:51.630735  556652 main.go:141] libmachine: Stopping "ha-106302-m04"...
	I1205 19:32:51.630768  556652 main.go:141] libmachine: (ha-106302-m04) Calling .GetState
	I1205 19:32:51.632320  556652 main.go:141] libmachine: (ha-106302-m04) Calling .Stop
	I1205 19:32:51.636412  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 0/120
	I1205 19:32:52.637744  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 1/120
	I1205 19:32:53.640596  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 2/120
	I1205 19:32:54.641935  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 3/120
	I1205 19:32:55.643406  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 4/120
	I1205 19:32:56.645833  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 5/120
	I1205 19:32:57.647163  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 6/120
	I1205 19:32:58.648662  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 7/120
	I1205 19:32:59.650900  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 8/120
	I1205 19:33:00.652766  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 9/120
	I1205 19:33:01.655041  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 10/120
	I1205 19:33:02.656390  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 11/120
	I1205 19:33:03.657906  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 12/120
	I1205 19:33:04.659288  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 13/120
	I1205 19:33:05.660711  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 14/120
	I1205 19:33:06.663047  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 15/120
	I1205 19:33:07.664387  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 16/120
	I1205 19:33:08.665979  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 17/120
	I1205 19:33:09.667377  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 18/120
	I1205 19:33:10.669067  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 19/120
	I1205 19:33:11.671122  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 20/120
	I1205 19:33:12.672585  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 21/120
	I1205 19:33:13.675009  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 22/120
	I1205 19:33:14.676661  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 23/120
	I1205 19:33:15.678269  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 24/120
	I1205 19:33:16.680434  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 25/120
	I1205 19:33:17.682064  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 26/120
	I1205 19:33:18.683860  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 27/120
	I1205 19:33:19.685403  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 28/120
	I1205 19:33:20.687471  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 29/120
	I1205 19:33:21.689895  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 30/120
	I1205 19:33:22.691204  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 31/120
	I1205 19:33:23.692800  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 32/120
	I1205 19:33:24.694946  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 33/120
	I1205 19:33:25.696955  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 34/120
	I1205 19:33:26.698986  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 35/120
	I1205 19:33:27.700492  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 36/120
	I1205 19:33:28.701896  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 37/120
	I1205 19:33:29.703162  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 38/120
	I1205 19:33:30.704707  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 39/120
	I1205 19:33:31.706881  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 40/120
	I1205 19:33:32.708289  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 41/120
	I1205 19:33:33.709733  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 42/120
	I1205 19:33:34.711961  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 43/120
	I1205 19:33:35.713513  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 44/120
	I1205 19:33:36.715528  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 45/120
	I1205 19:33:37.716942  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 46/120
	I1205 19:33:38.719147  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 47/120
	I1205 19:33:39.721517  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 48/120
	I1205 19:33:40.722863  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 49/120
	I1205 19:33:41.724989  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 50/120
	I1205 19:33:42.726844  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 51/120
	I1205 19:33:43.728437  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 52/120
	I1205 19:33:44.730615  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 53/120
	I1205 19:33:45.732192  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 54/120
	I1205 19:33:46.734125  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 55/120
	I1205 19:33:47.735672  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 56/120
	I1205 19:33:48.737036  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 57/120
	I1205 19:33:49.738317  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 58/120
	I1205 19:33:50.739743  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 59/120
	I1205 19:33:51.741741  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 60/120
	I1205 19:33:52.743191  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 61/120
	I1205 19:33:53.744780  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 62/120
	I1205 19:33:54.746209  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 63/120
	I1205 19:33:55.748557  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 64/120
	I1205 19:33:56.750820  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 65/120
	I1205 19:33:57.752233  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 66/120
	I1205 19:33:58.753864  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 67/120
	I1205 19:33:59.755277  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 68/120
	I1205 19:34:00.756660  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 69/120
	I1205 19:34:01.758843  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 70/120
	I1205 19:34:02.760792  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 71/120
	I1205 19:34:03.762933  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 72/120
	I1205 19:34:04.764553  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 73/120
	I1205 19:34:05.766134  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 74/120
	I1205 19:34:06.768534  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 75/120
	I1205 19:34:07.770830  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 76/120
	I1205 19:34:08.772060  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 77/120
	I1205 19:34:09.774003  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 78/120
	I1205 19:34:10.775468  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 79/120
	I1205 19:34:11.777115  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 80/120
	I1205 19:34:12.778867  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 81/120
	I1205 19:34:13.781248  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 82/120
	I1205 19:34:14.782848  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 83/120
	I1205 19:34:15.784954  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 84/120
	I1205 19:34:16.787048  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 85/120
	I1205 19:34:17.788472  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 86/120
	I1205 19:34:18.790810  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 87/120
	I1205 19:34:19.792615  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 88/120
	I1205 19:34:20.794846  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 89/120
	I1205 19:34:21.797212  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 90/120
	I1205 19:34:22.799472  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 91/120
	I1205 19:34:23.801773  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 92/120
	I1205 19:34:24.803049  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 93/120
	I1205 19:34:25.804385  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 94/120
	I1205 19:34:26.806127  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 95/120
	I1205 19:34:27.807389  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 96/120
	I1205 19:34:28.808538  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 97/120
	I1205 19:34:29.810730  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 98/120
	I1205 19:34:30.811947  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 99/120
	I1205 19:34:31.814108  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 100/120
	I1205 19:34:32.815631  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 101/120
	I1205 19:34:33.817283  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 102/120
	I1205 19:34:34.819001  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 103/120
	I1205 19:34:35.820570  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 104/120
	I1205 19:34:36.822633  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 105/120
	I1205 19:34:37.824788  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 106/120
	I1205 19:34:38.826226  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 107/120
	I1205 19:34:39.827614  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 108/120
	I1205 19:34:40.828920  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 109/120
	I1205 19:34:41.831176  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 110/120
	I1205 19:34:42.832694  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 111/120
	I1205 19:34:43.834772  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 112/120
	I1205 19:34:44.836310  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 113/120
	I1205 19:34:45.837921  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 114/120
	I1205 19:34:46.839678  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 115/120
	I1205 19:34:47.840904  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 116/120
	I1205 19:34:48.842351  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 117/120
	I1205 19:34:49.843993  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 118/120
	I1205 19:34:50.845359  556652 main.go:141] libmachine: (ha-106302-m04) Waiting for machine to stop 119/120
	I1205 19:34:51.846610  556652 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 19:34:51.846681  556652 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 19:34:51.848564  556652 out.go:201] 
	W1205 19:34:51.849878  556652 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 19:34:51.849904  556652 out.go:270] * 
	* 
	W1205 19:34:51.853244  556652 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 19:34:51.854466  556652 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-106302 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr: (18.908896387s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr": 
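The stderr above shows why the stop returned exit status 82: libmachine polled the m04 VM roughly once per second for 120 iterations (matching the 2m0.5s runtime) and the domain never left the "Running" state, so minikube gave up with GUEST_STOP_TIMEOUT. On a KVM host like this one, the stuck domain can usually be inspected and, as a last resort, forced off with virsh (domain name and libvirt URI are taken from the logs and cluster config above; a forced power-off is roughly equivalent to pulling the plug):

    sudo virsh -c qemu:///system list --all              # shows ha-106302-m04 and its current state
    sudo virsh -c qemu:///system destroy ha-106302-m04   # hard power-off of the unresponsive node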
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-106302 -n ha-106302
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-106302 logs -n 25: (2.16462086s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m04 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp testdata/cp-test.txt                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302:/home/docker/cp-test_ha-106302-m04_ha-106302.txt                     |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302 sudo cat                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302.txt                               |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03:/home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m03 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-106302 node stop m02 -v=7                                                   | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-106302 node start m02 -v=7                                                  | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-106302 -v=7                                                         | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-106302 -v=7                                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-106302 --wait=true -v=7                                                  | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:28 UTC | 05 Dec 24 19:32 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-106302                                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:32 UTC |                     |
	| node    | ha-106302 node delete m03 -v=7                                                 | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:32 UTC | 05 Dec 24 19:32 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | ha-106302 stop -v=7                                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:32 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:28:35
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:28:35.527691  554879 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:28:35.527838  554879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:28:35.527850  554879 out.go:358] Setting ErrFile to fd 2...
	I1205 19:28:35.527859  554879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:28:35.528059  554879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:28:35.528738  554879 out.go:352] Setting JSON to false
	I1205 19:28:35.529796  554879 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7862,"bootTime":1733419054,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:28:35.529863  554879 start.go:139] virtualization: kvm guest
	I1205 19:28:35.532623  554879 out.go:177] * [ha-106302] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:28:35.534469  554879 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:28:35.534494  554879 notify.go:220] Checking for updates...
	I1205 19:28:35.537413  554879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:28:35.538918  554879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:28:35.540417  554879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:28:35.541827  554879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:28:35.543182  554879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:28:35.545069  554879 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:28:35.545206  554879 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:28:35.545691  554879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:28:35.545751  554879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:28:35.562577  554879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I1205 19:28:35.563124  554879 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:28:35.563709  554879 main.go:141] libmachine: Using API Version  1
	I1205 19:28:35.563729  554879 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:28:35.564181  554879 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:28:35.564417  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:28:35.602570  554879 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 19:28:35.603810  554879 start.go:297] selected driver: kvm2
	I1205 19:28:35.603827  554879 start.go:901] validating driver "kvm2" against &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false def
ault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:28:35.604005  554879 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:28:35.604473  554879 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:28:35.604583  554879 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:28:35.620368  554879 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:28:35.621347  554879 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:28:35.621401  554879 cni.go:84] Creating CNI manager for ""
	I1205 19:28:35.621480  554879 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 19:28:35.621560  554879 start.go:340] cluster config:
	{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:28:35.621745  554879 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:28:35.624712  554879 out.go:177] * Starting "ha-106302" primary control-plane node in "ha-106302" cluster
	I1205 19:28:35.626078  554879 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:28:35.626117  554879 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:28:35.626128  554879 cache.go:56] Caching tarball of preloaded images
	I1205 19:28:35.626234  554879 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:28:35.626248  554879 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:28:35.626385  554879 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:28:35.626575  554879 start.go:360] acquireMachinesLock for ha-106302: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:28:35.626623  554879 start.go:364] duration metric: took 27.573µs to acquireMachinesLock for "ha-106302"
	I1205 19:28:35.626644  554879 start.go:96] Skipping create...Using existing machine configuration
	I1205 19:28:35.626659  554879 fix.go:54] fixHost starting: 
	I1205 19:28:35.626932  554879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:28:35.626971  554879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:28:35.641915  554879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I1205 19:28:35.642454  554879 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:28:35.643011  554879 main.go:141] libmachine: Using API Version  1
	I1205 19:28:35.643045  554879 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:28:35.643373  554879 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:28:35.643551  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:28:35.643683  554879 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:28:35.645108  554879 fix.go:112] recreateIfNeeded on ha-106302: state=Running err=<nil>
	W1205 19:28:35.645131  554879 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 19:28:35.646930  554879 out.go:177] * Updating the running kvm2 "ha-106302" VM ...
	I1205 19:28:35.648224  554879 machine.go:93] provisionDockerMachine start ...
	I1205 19:28:35.648242  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:28:35.648495  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:35.650848  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.651235  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:35.651259  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.651393  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:28:35.651587  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.651786  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.651951  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:28:35.652146  554879 main.go:141] libmachine: Using SSH client type: native
	I1205 19:28:35.652380  554879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:28:35.652394  554879 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 19:28:35.770011  554879 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:28:35.770053  554879 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:28:35.770355  554879 buildroot.go:166] provisioning hostname "ha-106302"
	I1205 19:28:35.770387  554879 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:28:35.770633  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:35.773659  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.774081  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:35.774115  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.774317  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:28:35.774514  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.774705  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.774851  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:28:35.775027  554879 main.go:141] libmachine: Using SSH client type: native
	I1205 19:28:35.775235  554879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:28:35.775255  554879 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302 && echo "ha-106302" | sudo tee /etc/hostname
	I1205 19:28:35.904598  554879 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:28:35.904633  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:35.907480  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.907776  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:35.907807  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:35.907986  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:28:35.908216  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.908465  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:35.908629  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:28:35.908826  554879 main.go:141] libmachine: Using SSH client type: native
	I1205 19:28:35.909067  554879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:28:35.909091  554879 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:28:36.025798  554879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:28:36.025836  554879 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:28:36.025863  554879 buildroot.go:174] setting up certificates
	I1205 19:28:36.025875  554879 provision.go:84] configureAuth start
	I1205 19:28:36.025885  554879 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:28:36.026193  554879 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:28:36.029004  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.029347  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:36.029370  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.029544  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:36.031867  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.032229  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:36.032260  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.032411  554879 provision.go:143] copyHostCerts
	I1205 19:28:36.032445  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:28:36.032491  554879 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:28:36.032516  554879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:28:36.032601  554879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:28:36.032732  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:28:36.032764  554879 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:28:36.032774  554879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:28:36.032816  554879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:28:36.032896  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:28:36.032920  554879 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:28:36.032930  554879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:28:36.032965  554879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:28:36.033050  554879 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302 san=[127.0.0.1 192.168.39.185 ha-106302 localhost minikube]
	I1205 19:28:36.273629  554879 provision.go:177] copyRemoteCerts
	I1205 19:28:36.273737  554879 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:28:36.273790  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:36.276964  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.277414  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:36.277450  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.277642  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:28:36.277878  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:36.278078  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:28:36.278207  554879 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:28:36.364054  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:28:36.364143  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 19:28:36.395458  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:28:36.395622  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:28:36.427103  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:28:36.427177  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:28:36.456007  554879 provision.go:87] duration metric: took 430.110229ms to configureAuth
	I1205 19:28:36.456054  554879 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:28:36.456322  554879 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:28:36.456401  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:28:36.459122  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.459517  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:28:36.459548  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:28:36.459701  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:28:36.459932  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:36.460092  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:28:36.460251  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:28:36.460439  554879 main.go:141] libmachine: Using SSH client type: native
	I1205 19:28:36.460625  554879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:28:36.460643  554879 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:30:07.367134  554879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:30:07.367178  554879 machine.go:96] duration metric: took 1m31.718940922s to provisionDockerMachine
	I1205 19:30:07.367194  554879 start.go:293] postStartSetup for "ha-106302" (driver="kvm2")
	I1205 19:30:07.367215  554879 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:30:07.367244  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.367658  554879 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:30:07.367707  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:30:07.371043  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.371563  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.371594  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.371786  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:30:07.372006  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.372188  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:30:07.372320  554879 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:30:07.460643  554879 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:30:07.465920  554879 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:30:07.465958  554879 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:30:07.466031  554879 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:30:07.466131  554879 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:30:07.466146  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:30:07.466244  554879 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:30:07.476481  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:30:07.502151  554879 start.go:296] duration metric: took 134.939266ms for postStartSetup
	I1205 19:30:07.502222  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.502575  554879 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1205 19:30:07.502612  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:30:07.505499  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.505947  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.505974  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.506195  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:30:07.506469  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.506692  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:30:07.506885  554879 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	W1205 19:30:07.592361  554879 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1205 19:30:07.592397  554879 fix.go:56] duration metric: took 1m31.965743605s for fixHost
	I1205 19:30:07.592425  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:30:07.595282  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.595705  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.595741  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.595939  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:30:07.596141  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.596330  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.596464  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:30:07.596640  554879 main.go:141] libmachine: Using SSH client type: native
	I1205 19:30:07.596823  554879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:30:07.596835  554879 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:30:07.709615  554879 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733427007.672931795
	
	I1205 19:30:07.709641  554879 fix.go:216] guest clock: 1733427007.672931795
	I1205 19:30:07.709648  554879 fix.go:229] Guest: 2024-12-05 19:30:07.672931795 +0000 UTC Remote: 2024-12-05 19:30:07.592406126 +0000 UTC m=+92.110857371 (delta=80.525669ms)
	I1205 19:30:07.709695  554879 fix.go:200] guest clock delta is within tolerance: 80.525669ms
	I1205 19:30:07.709703  554879 start.go:83] releasing machines lock for "ha-106302", held for 1m32.08306783s
	I1205 19:30:07.709726  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.709976  554879 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:30:07.712483  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.712912  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.712937  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.713118  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.713683  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.713874  554879 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:30:07.713991  554879 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:30:07.714054  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:30:07.714073  554879 ssh_runner.go:195] Run: cat /version.json
	I1205 19:30:07.714093  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:30:07.716530  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.716718  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.716899  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.716924  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.717142  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:07.717149  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:30:07.717170  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:07.717278  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:30:07.717361  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.717431  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:30:07.717510  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:30:07.717578  554879 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:30:07.717653  554879 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:30:07.717726  554879 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:30:07.819951  554879 ssh_runner.go:195] Run: systemctl --version
	I1205 19:30:07.826622  554879 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:30:07.999269  554879 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:30:08.008334  554879 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:30:08.008475  554879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:30:08.018938  554879 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 19:30:08.018972  554879 start.go:495] detecting cgroup driver to use...
	I1205 19:30:08.019035  554879 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:30:08.036490  554879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:30:08.052073  554879 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:30:08.052152  554879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:30:08.067358  554879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:30:08.081765  554879 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:30:08.232512  554879 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:30:08.380609  554879 docker.go:233] disabling docker service ...
	I1205 19:30:08.380704  554879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:30:08.400224  554879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:30:08.415642  554879 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:30:08.563957  554879 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:30:08.712480  554879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:30:08.727242  554879 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:30:08.746369  554879 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:30:08.746442  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.757694  554879 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:30:08.757788  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.768879  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.780673  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.792491  554879 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:30:08.803824  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.815121  554879 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.826604  554879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:30:08.837876  554879 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:30:08.847651  554879 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:30:08.857645  554879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:30:09.000019  554879 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:30:09.240583  554879 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:30:09.240678  554879 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:30:09.246074  554879 start.go:563] Will wait 60s for crictl version
	I1205 19:30:09.246157  554879 ssh_runner.go:195] Run: which crictl
	I1205 19:30:09.250524  554879 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:30:09.291800  554879 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:30:09.291929  554879 ssh_runner.go:195] Run: crio --version
	I1205 19:30:09.323936  554879 ssh_runner.go:195] Run: crio --version
	I1205 19:30:09.358907  554879 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:30:09.360671  554879 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:30:09.363379  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:09.363750  554879 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:30:09.363776  554879 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:30:09.364045  554879 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:30:09.369359  554879 kubeadm.go:883] updating cluster {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:30:09.369575  554879 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:30:09.369646  554879 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:30:09.430190  554879 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:30:09.430219  554879 crio.go:433] Images already preloaded, skipping extraction
	I1205 19:30:09.430275  554879 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:30:09.465600  554879 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:30:09.465629  554879 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:30:09.465650  554879 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.2 crio true true} ...
	I1205 19:30:09.465782  554879 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:30:09.465872  554879 ssh_runner.go:195] Run: crio config
	I1205 19:30:09.519752  554879 cni.go:84] Creating CNI manager for ""
	I1205 19:30:09.519779  554879 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 19:30:09.519792  554879 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:30:09.519821  554879 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-106302 NodeName:ha-106302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:30:09.519982  554879 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-106302"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 19:30:09.520014  554879 kube-vip.go:115] generating kube-vip config ...
	I1205 19:30:09.520079  554879 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:30:09.532477  554879 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:30:09.532600  554879 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1205 19:30:09.532669  554879 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:30:09.543152  554879 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:30:09.543232  554879 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 19:30:09.553019  554879 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 19:30:09.571160  554879 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:30:09.588722  554879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 19:30:09.607288  554879 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:30:09.626607  554879 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:30:09.631117  554879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:30:09.782494  554879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:30:09.798608  554879 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.185
	I1205 19:30:09.798643  554879 certs.go:194] generating shared ca certs ...
	I1205 19:30:09.798668  554879 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:30:09.798879  554879 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:30:09.798945  554879 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:30:09.798960  554879 certs.go:256] generating profile certs ...
	I1205 19:30:09.799068  554879 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:30:09.799107  554879 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.41d1e685
	I1205 19:30:09.799129  554879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.41d1e685 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.151 192.168.39.254]
	I1205 19:30:09.945544  554879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.41d1e685 ...
	I1205 19:30:09.945582  554879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.41d1e685: {Name:mk724d06bc0a47e33f486f39e278b61de9784910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:30:09.945762  554879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.41d1e685 ...
	I1205 19:30:09.945775  554879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.41d1e685: {Name:mkbcf4a6dca43a506ae36ad63ef2f4c9d1d6d2ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:30:09.945846  554879 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.41d1e685 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:30:09.946012  554879 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.41d1e685 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:30:09.946150  554879 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:30:09.946167  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:30:09.946180  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:30:09.946194  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:30:09.946205  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:30:09.946215  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:30:09.946232  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:30:09.946242  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:30:09.946250  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:30:09.946307  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:30:09.946338  554879 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:30:09.946348  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:30:09.946369  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:30:09.946397  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:30:09.946419  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:30:09.946455  554879 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:30:09.946481  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:30:09.946496  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:30:09.946508  554879 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:30:09.947122  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:30:09.974250  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:30:09.999270  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:30:10.024451  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:30:10.049541  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 19:30:10.090543  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:30:10.172357  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:30:10.209470  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:30:10.253907  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:30:10.287382  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:30:10.331088  554879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:30:10.356229  554879 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:30:10.375350  554879 ssh_runner.go:195] Run: openssl version
	I1205 19:30:10.386482  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:30:10.407968  554879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:30:10.417315  554879 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:30:10.417410  554879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:30:10.430973  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:30:10.447310  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:30:10.459696  554879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:30:10.465082  554879 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:30:10.465150  554879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:30:10.471435  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:30:10.481599  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:30:10.493473  554879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:30:10.498636  554879 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:30:10.498699  554879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:30:10.505493  554879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:30:10.515962  554879 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:30:10.520786  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 19:30:10.526817  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 19:30:10.532739  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 19:30:10.538422  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 19:30:10.544544  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 19:30:10.550558  554879 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 19:30:10.556505  554879 kubeadm.go:392] StartCluster: {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagecl
ass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.
L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:30:10.556696  554879 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:30:10.556799  554879 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:30:10.604655  554879 cri.go:89] found id: "a65bf505b34f9906f0f23b672cb884b36d128d9d24923b155e7a5a563c92cf4c"
	I1205 19:30:10.604682  554879 cri.go:89] found id: "38961b05e32f8216f32c6fd1edd38d7b637db2503dffdd56e7a329141bb7e441"
	I1205 19:30:10.604688  554879 cri.go:89] found id: "902fae35af9d2c269f7103e8a1d1a7b6c75461345d432f6499d7a05f2f65bfeb"
	I1205 19:30:10.604693  554879 cri.go:89] found id: "cf39c2cfb9e986346026b13c960f3e6b36c53e4a1f0a05d92f18314e7618bd25"
	I1205 19:30:10.604696  554879 cri.go:89] found id: "9d027dcc636efeaa92fb4d3b83f2e1b1e8f80d4f23b6e77400b693a9dd92a33d"
	I1205 19:30:10.604701  554879 cri.go:89] found id: "d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b"
	I1205 19:30:10.604705  554879 cri.go:89] found id: "71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07"
	I1205 19:30:10.604709  554879 cri.go:89] found id: "8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e"
	I1205 19:30:10.604713  554879 cri.go:89] found id: "013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d"
	I1205 19:30:10.604721  554879 cri.go:89] found id: "a639bf005af2020a5321599ccc56f99bd4c5be6aa0c227a6310955274ec60e3e"
	I1205 19:30:10.604725  554879 cri.go:89] found id: "73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a"
	I1205 19:30:10.604729  554879 cri.go:89] found id: "8d7fcd5f7d56deb9c9698f0941fa3b61d597efc9495ed27488a425d6030baa44"
	I1205 19:30:10.604750  554879 cri.go:89] found id: "dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8"
	I1205 19:30:10.604755  554879 cri.go:89] found id: "c251344563e4644b942bcb793dd412b7fae15eefbb4142b68e4047db60a8fbeb"
	I1205 19:30:10.604763  554879 cri.go:89] found id: ""
	I1205 19:30:10.604819  554879 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-106302 -n ha-106302
helpers_test.go:261: (dbg) Run:  kubectl --context ha-106302 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.20s)
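
Note on the `-checkend 86400` invocations in the post-mortem above: each one asks openssl whether the named certificate will still be valid 86400 seconds (24 hours) from now. As the log shows, minikube shells out to openssl for this; the snippet below is only a minimal, hypothetical Go sketch of an equivalent check, with the certificate path copied from the log for illustration.

```go
// Hypothetical sketch: approximate Go equivalent of
// `openssl x509 -noout -in <cert> -checkend 86400`,
// i.e. "does this certificate expire within the next 24 hours?"
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within duration d (or is already expired).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + d" is past the certificate's NotAfter timestamp.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; adjust as needed.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
```
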

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (836.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-106302 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 19:35:51.380841  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:38:15.014211  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:39:38.076191  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:40:51.381590  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:43:15.011874  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:45:51.382239  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:48:15.012468  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:48:54.450093  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-106302 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: signal: killed (13m52.067398372s)

                                                
                                                
-- stdout --
	* [ha-106302] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-106302" primary control-plane node in "ha-106302" cluster
	* Updating the running kvm2 "ha-106302" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-106302-m02" control-plane node in "ha-106302" cluster
	* Updating the running kvm2 "ha-106302-m02" VM ...
	* Found network options:
	  - NO_PROXY=192.168.39.185
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.185
	* Verifying Kubernetes components...
	
	* Starting "ha-106302-m04" worker node in "ha-106302" cluster
	* Restarting existing kvm2 VM for "ha-106302-m04" ...
	* Found network options:
	  - NO_PROXY=192.168.39.185,192.168.39.22
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.185
	  - env NO_PROXY=192.168.39.185,192.168.39.22
	* Verifying Kubernetes components...

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:35:13.608939  557310 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:35:13.609076  557310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:13.609087  557310 out.go:358] Setting ErrFile to fd 2...
	I1205 19:35:13.609094  557310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:13.609266  557310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:35:13.609859  557310 out.go:352] Setting JSON to false
	I1205 19:35:13.610943  557310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8260,"bootTime":1733419054,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:35:13.611061  557310 start.go:139] virtualization: kvm guest
	I1205 19:35:13.613524  557310 out.go:177] * [ha-106302] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:35:13.615073  557310 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:35:13.615129  557310 notify.go:220] Checking for updates...
	I1205 19:35:13.617901  557310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:13.619505  557310 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:35:13.620895  557310 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:35:13.622196  557310 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:35:13.623598  557310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:35:13.625506  557310 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:35:13.626132  557310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:35:13.626242  557310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:35:13.642386  557310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I1205 19:35:13.642965  557310 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:35:13.643637  557310 main.go:141] libmachine: Using API Version  1
	I1205 19:35:13.643670  557310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:35:13.643993  557310 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:35:13.644196  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:35:13.644548  557310 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:35:13.644847  557310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:35:13.644892  557310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:35:13.660291  557310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I1205 19:35:13.660771  557310 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:35:13.661277  557310 main.go:141] libmachine: Using API Version  1
	I1205 19:35:13.661308  557310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:35:13.661635  557310 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:35:13.661808  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:35:13.701443  557310 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 19:35:13.702909  557310 start.go:297] selected driver: kvm2
	I1205 19:35:13.702937  557310 start.go:901] validating driver "kvm2" against &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:35:13.703160  557310 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:35:13.703625  557310 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:13.703724  557310 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:35:13.720236  557310 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:35:13.721022  557310 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:35:13.721061  557310 cni.go:84] Creating CNI manager for ""
	I1205 19:35:13.721114  557310 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 19:35:13.721169  557310 start.go:340] cluster config:
	{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:35:13.721305  557310 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:13.723934  557310 out.go:177] * Starting "ha-106302" primary control-plane node in "ha-106302" cluster
	I1205 19:35:13.725356  557310 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:35:13.725414  557310 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:35:13.725443  557310 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:13.725565  557310 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:35:13.725579  557310 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:35:13.725751  557310 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:35:13.726047  557310 start.go:360] acquireMachinesLock for ha-106302: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:35:13.726116  557310 start.go:364] duration metric: took 40.253µs to acquireMachinesLock for "ha-106302"
	I1205 19:35:13.726133  557310 start.go:96] Skipping create...Using existing machine configuration
	I1205 19:35:13.726164  557310 fix.go:54] fixHost starting: 
	I1205 19:35:13.726539  557310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:35:13.726580  557310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:35:13.742166  557310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35255
	I1205 19:35:13.742597  557310 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:35:13.743105  557310 main.go:141] libmachine: Using API Version  1
	I1205 19:35:13.743133  557310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:35:13.743459  557310 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:35:13.743664  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:35:13.743794  557310 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:35:13.745552  557310 fix.go:112] recreateIfNeeded on ha-106302: state=Running err=<nil>
	W1205 19:35:13.745579  557310 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 19:35:13.747689  557310 out.go:177] * Updating the running kvm2 "ha-106302" VM ...
	I1205 19:35:13.748915  557310 machine.go:93] provisionDockerMachine start ...
	I1205 19:35:13.748938  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:35:13.749154  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:13.751449  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:13.751875  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:13.751905  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:13.752048  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:35:13.752233  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:13.752422  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:13.752566  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:35:13.752711  557310 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:13.752976  557310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:35:13.752996  557310 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 19:35:13.869104  557310 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:35:13.869139  557310 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:35:13.869405  557310 buildroot.go:166] provisioning hostname "ha-106302"
	I1205 19:35:13.869434  557310 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:35:13.869603  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:13.872413  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:13.872860  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:13.872890  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:13.873071  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:35:13.873273  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:13.873449  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:13.873633  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:35:13.873793  557310 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:13.874037  557310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:35:13.874061  557310 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302 && echo "ha-106302" | sudo tee /etc/hostname
	I1205 19:35:14.004391  557310 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:35:14.004443  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:14.007343  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.007782  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:14.007820  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.007990  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:35:14.008181  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:14.008364  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:14.008500  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:35:14.008633  557310 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:14.008817  557310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:35:14.008835  557310 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:35:14.125491  557310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:35:14.125539  557310 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:35:14.125592  557310 buildroot.go:174] setting up certificates
	I1205 19:35:14.125625  557310 provision.go:84] configureAuth start
	I1205 19:35:14.125643  557310 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:35:14.125940  557310 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:35:14.128603  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.129008  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:14.129033  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.129156  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:14.131646  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.132034  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:14.132062  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.132219  557310 provision.go:143] copyHostCerts
	I1205 19:35:14.132260  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:35:14.132333  557310 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:35:14.132353  557310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:35:14.132420  557310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:35:14.132515  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:35:14.132536  557310 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:35:14.132543  557310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:35:14.132570  557310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:35:14.132611  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:35:14.132633  557310 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:35:14.132645  557310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:35:14.132668  557310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:35:14.132713  557310 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302 san=[127.0.0.1 192.168.39.185 ha-106302 localhost minikube]
	I1205 19:35:14.394858  557310 provision.go:177] copyRemoteCerts
	I1205 19:35:14.394931  557310 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:35:14.394968  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:14.397777  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.398087  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:14.398124  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.398302  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:35:14.398505  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:14.398650  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:35:14.398826  557310 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:35:14.487758  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:35:14.487898  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:35:14.517316  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:35:14.517408  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 19:35:14.560811  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:35:14.560884  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:35:14.588221  557310 provision.go:87] duration metric: took 462.576195ms to configureAuth
	I1205 19:35:14.588256  557310 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:35:14.588579  557310 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:35:14.588681  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:14.591514  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.591865  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:14.591893  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.592075  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:35:14.592331  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:14.592487  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:14.592655  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:35:14.592814  557310 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:14.593002  557310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:35:14.593020  557310 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:36:49.413483  557310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:36:49.413547  557310 machine.go:96] duration metric: took 1m35.664609788s to provisionDockerMachine
	I1205 19:36:49.413572  557310 start.go:293] postStartSetup for "ha-106302" (driver="kvm2")
	I1205 19:36:49.413587  557310 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:36:49.413625  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.414038  557310 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:36:49.414093  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:36:49.418151  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.418588  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.418619  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.418827  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:36:49.419032  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.419259  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:36:49.419448  557310 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:36:49.511350  557310 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:36:49.516672  557310 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:36:49.516714  557310 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:36:49.516809  557310 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:36:49.516922  557310 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:36:49.516942  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:36:49.517094  557310 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:36:49.528556  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:36:49.555509  557310 start.go:296] duration metric: took 141.9189ms for postStartSetup
	I1205 19:36:49.555567  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.555948  557310 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1205 19:36:49.556052  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:36:49.559436  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.559840  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.559864  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.560074  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:36:49.560327  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.560519  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:36:49.560665  557310 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	W1205 19:36:49.647623  557310 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1205 19:36:49.647661  557310 fix.go:56] duration metric: took 1m35.921521076s for fixHost
	I1205 19:36:49.647694  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:36:49.650424  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.650772  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.650806  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.650967  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:36:49.651200  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.651450  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.651624  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:36:49.651781  557310 main.go:141] libmachine: Using SSH client type: native
	I1205 19:36:49.651985  557310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:36:49.651998  557310 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:36:49.761375  557310 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733427409.722330969
	
	I1205 19:36:49.761406  557310 fix.go:216] guest clock: 1733427409.722330969
	I1205 19:36:49.761415  557310 fix.go:229] Guest: 2024-12-05 19:36:49.722330969 +0000 UTC Remote: 2024-12-05 19:36:49.647676776 +0000 UTC m=+96.080577521 (delta=74.654193ms)
	I1205 19:36:49.761468  557310 fix.go:200] guest clock delta is within tolerance: 74.654193ms
	I1205 19:36:49.761476  557310 start.go:83] releasing machines lock for "ha-106302", held for 1m36.035350243s
	I1205 19:36:49.761529  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.761803  557310 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:36:49.764694  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.765167  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.765191  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.765393  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.765978  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.766176  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.766284  557310 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:36:49.766361  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:36:49.766440  557310 ssh_runner.go:195] Run: cat /version.json
	I1205 19:36:49.766471  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:36:49.768949  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.769173  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.769405  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.769436  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.769619  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.769652  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.769659  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:36:49.769841  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:36:49.769857  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.770008  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.770073  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:36:49.770154  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:36:49.770249  557310 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:36:49.770354  557310 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:36:49.855940  557310 ssh_runner.go:195] Run: systemctl --version
	I1205 19:36:49.906540  557310 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:36:50.162996  557310 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:36:50.174445  557310 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:36:50.174530  557310 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:36:50.189596  557310 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 19:36:50.189630  557310 start.go:495] detecting cgroup driver to use...
	I1205 19:36:50.189703  557310 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:36:50.208309  557310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:36:50.224082  557310 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:36:50.224155  557310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:36:50.239456  557310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:36:50.254131  557310 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:36:50.439057  557310 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:36:50.608162  557310 docker.go:233] disabling docker service ...
	I1205 19:36:50.608315  557310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:36:50.629770  557310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:36:50.645635  557310 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:36:50.810883  557310 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:36:50.974935  557310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:36:50.992944  557310 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:36:51.014041  557310 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:36:51.014129  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.025522  557310 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:36:51.025613  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.037015  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.048787  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.060389  557310 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:36:51.073332  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.085041  557310 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.097189  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
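Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the pinned pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. Reconstructed from the logged commands only (key order and surrounding settings in the real file may differ), the resulting fragment looks roughly like:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]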
	I1205 19:36:51.109461  557310 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:36:51.121322  557310 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:36:51.133701  557310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:36:51.297254  557310 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:38:25.572139  557310 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.27481933s)
	I1205 19:38:25.572200  557310 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:38:25.572297  557310 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:38:25.583120  557310 start.go:563] Will wait 60s for crictl version
	I1205 19:38:25.583188  557310 ssh_runner.go:195] Run: which crictl
	I1205 19:38:25.590169  557310 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:38:25.628394  557310 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
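The Version, RuntimeName, RuntimeVersion and RuntimeApiVersion values above are the response to the CRI Version RPC that `crictl version` sends over the crio socket configured in /etc/crictl.yaml earlier in this log. Purely as an illustration (this is not minikube's code path; it assumes the standard k8s.io/cri-api and google.golang.org/grpc modules), the same query in Go could look like:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket that /etc/crictl.yaml points crictl at.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPC that crictl version issues; the reply carries the fields logged above.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
	}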
	I1205 19:38:25.628510  557310 ssh_runner.go:195] Run: crio --version
	I1205 19:38:25.659655  557310 ssh_runner.go:195] Run: crio --version
	I1205 19:38:25.692450  557310 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:38:25.693996  557310 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:38:25.696995  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:38:25.697331  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:38:25.697363  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:38:25.697679  557310 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:38:25.702888  557310 kubeadm.go:883] updating cluster {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:38:25.703050  557310 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:38:25.703116  557310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:38:25.778383  557310 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:38:25.778409  557310 crio.go:433] Images already preloaded, skipping extraction
	I1205 19:38:25.778470  557310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:38:25.816616  557310 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:38:25.816641  557310 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:38:25.816652  557310 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.2 crio true true} ...
	I1205 19:38:25.816817  557310 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 19:38:25.816914  557310 ssh_runner.go:195] Run: crio config
	I1205 19:38:25.868250  557310 cni.go:84] Creating CNI manager for ""
	I1205 19:38:25.868298  557310 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 19:38:25.868312  557310 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:38:25.868352  557310 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-106302 NodeName:ha-106302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:38:25.868501  557310 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-106302"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 19:38:25.868521  557310 kube-vip.go:115] generating kube-vip config ...
	I1205 19:38:25.868573  557310 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:38:25.882084  557310 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:38:25.882209  557310 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
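The kube-vip config generation above first runs `modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack` and only then auto-enables control-plane load-balancing (the lb_enable/lb_port entries in the manifest); minikube itself just checks the modprobe exit status over SSH. Purely as an illustration (assumptions: a Linux host with a readable /proc/modules; modules built into the kernel will not appear there), a local check that those modules ended up loaded might look like:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Modules the kube-vip IPVS load balancer relies on, per the modprobe logged above.
		want := map[string]bool{
			"ip_vs": false, "ip_vs_rr": false, "ip_vs_wrr": false,
			"ip_vs_sh": false, "nf_conntrack": false,
		}

		f, err := os.Open("/proc/modules")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		s := bufio.NewScanner(f)
		for s.Scan() {
			fields := strings.Fields(s.Text())
			if len(fields) == 0 {
				continue
			}
			if _, ok := want[fields[0]]; ok {
				want[fields[0]] = true
			}
		}
		for m, loaded := range want {
			fmt.Printf("%-12s loaded=%v\n", m, loaded)
		}
	}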
	I1205 19:38:25.882266  557310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:38:25.894192  557310 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:38:25.894295  557310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 19:38:25.905237  557310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 19:38:25.922788  557310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:38:25.941038  557310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 19:38:25.959304  557310 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:38:25.979506  557310 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:38:25.984676  557310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:38:26.143505  557310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:38:26.159640  557310 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.185
	I1205 19:38:26.159673  557310 certs.go:194] generating shared ca certs ...
	I1205 19:38:26.159697  557310 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:38:26.159922  557310 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:38:26.160007  557310 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:38:26.160019  557310 certs.go:256] generating profile certs ...
	I1205 19:38:26.160121  557310 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:38:26.160158  557310 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.7ff0e2df
	I1205 19:38:26.160181  557310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.7ff0e2df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.254]
	I1205 19:38:26.354068  557310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.7ff0e2df ...
	I1205 19:38:26.354108  557310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.7ff0e2df: {Name:mk3e0b7825cedb74ca15ceae5a04ae49f54cb3ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:38:26.354296  557310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.7ff0e2df ...
	I1205 19:38:26.354310  557310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.7ff0e2df: {Name:mke8becb21be3673d6efb9030d42d363dda6000c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:38:26.354382  557310 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.7ff0e2df -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:38:26.354540  557310 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.7ff0e2df -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:38:26.354674  557310 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:38:26.354691  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:38:26.354704  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:38:26.354715  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:38:26.354726  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:38:26.354738  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:38:26.354759  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:38:26.354771  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:38:26.354781  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:38:26.354833  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:38:26.354861  557310 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:38:26.354870  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:38:26.354897  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:38:26.354921  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:38:26.354945  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:38:26.354989  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:38:26.355016  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:38:26.355030  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:38:26.355043  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:38:26.355774  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:38:26.388406  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:38:26.417462  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:38:26.451021  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:38:26.483580  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 19:38:26.510181  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:38:26.538770  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:38:26.565811  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:38:26.592746  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:38:26.619843  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:38:26.647390  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:38:26.674528  557310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:38:26.694559  557310 ssh_runner.go:195] Run: openssl version
	I1205 19:38:26.701465  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:38:26.713395  557310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:38:26.718395  557310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:38:26.718467  557310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:38:26.724678  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:38:26.734863  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:38:26.748168  557310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:38:26.753156  557310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:38:26.753218  557310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:38:26.759556  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:38:26.769443  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:38:26.784109  557310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:38:26.789240  557310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:38:26.789305  557310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:38:26.795790  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:38:26.806806  557310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:38:26.812165  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 19:38:26.819255  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 19:38:26.826398  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 19:38:26.832922  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 19:38:26.839707  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 19:38:26.846142  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
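Each of the six openssl runs above is `x509 -noout -checkend 86400`, i.e. a check that the certificate has not expired and will not expire within the next 24 hours. A minimal Go sketch of the same check (assuming the path to a single-certificate PEM file is passed as the first argument):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		if len(os.Args) != 2 {
			fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
			os.Exit(2)
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil || block.Type != "CERTIFICATE" {
			panic("no CERTIFICATE PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Mirrors `openssl x509 -noout -checkend 86400`: fail if the cert
		// is expired or expires within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Printf("%s expires at %s (within 24h)\n", os.Args[1], cert.NotAfter)
			os.Exit(1)
		}
		fmt.Printf("%s is valid beyond the next 24h (NotAfter %s)\n", os.Args[1], cert.NotAfter)
	}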
	I1205 19:38:26.852538  557310 kubeadm.go:392] StartCluster: {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns
:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:38:26.852702  557310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:38:26.852749  557310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:38:26.902544  557310 cri.go:89] found id: "68c51ee204eb38a76d3beb834b82b577e0b449836ad36f5c46f075920de17732"
	I1205 19:38:26.902577  557310 cri.go:89] found id: "32fe2df450a9c3d89d3262258e777533999ccd1b3dba13d865d18ba980b7ed84"
	I1205 19:38:26.902582  557310 cri.go:89] found id: "c919731ce702dca42064f6e0ada3d4683ad09f061fb0b88d01cf889107477795"
	I1205 19:38:26.902603  557310 cri.go:89] found id: "0fc17543bee06b8214ac1c280e5cb52c366fa10fc75e18b001c86e0169b81856"
	I1205 19:38:26.902608  557310 cri.go:89] found id: "b843aa8efcdc1ea1b5e09dc8e6b29dad424e9c4affbc89d637d5a7d60b1445e2"
	I1205 19:38:26.902613  557310 cri.go:89] found id: "84e3963ed85bb3a5ea031d8a1148eb2f08d1ea4c4d83ad008ee6ced6b50416ca"
	I1205 19:38:26.902617  557310 cri.go:89] found id: "fc5108482526152ad88bfc494a3bf0cee67d9a53098d14c6ba609a069c257141"
	I1205 19:38:26.902623  557310 cri.go:89] found id: "b0919086301b626233d57451a2fc83050d1c7f2645654a8df4cf9ff91941f522"
	I1205 19:38:26.902627  557310 cri.go:89] found id: "465daaace4a51bf2b449cfc51ba14245e8b8feecc525343a3ebd50e90491a498"
	I1205 19:38:26.902635  557310 cri.go:89] found id: "09848948abbcc34d17881a5af2d7991fff11355e498d698d8b9a69335b6a48da"
	I1205 19:38:26.902643  557310 cri.go:89] found id: "c843b20c132ebb58bdcd1dce1070460b290ac7096857aadbba9dd845f1480860"
	I1205 19:38:26.902651  557310 cri.go:89] found id: "75e431cbb51723de1318eeec7596ecac6d30be6779e91881edcab1537013f077"
	I1205 19:38:26.902658  557310 cri.go:89] found id: "a65bf505b34f9906f0f23b672cb884b36d128d9d24923b155e7a5a563c92cf4c"
	I1205 19:38:26.902663  557310 cri.go:89] found id: "d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b"
	I1205 19:38:26.902674  557310 cri.go:89] found id: "71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07"
	I1205 19:38:26.902682  557310 cri.go:89] found id: "8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e"
	I1205 19:38:26.902686  557310 cri.go:89] found id: "013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d"
	I1205 19:38:26.902693  557310 cri.go:89] found id: "73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a"
	I1205 19:38:26.902700  557310 cri.go:89] found id: "dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8"
	I1205 19:38:26.902704  557310 cri.go:89] found id: ""
	I1205 19:38:26.902766  557310 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-106302 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-106302 -n ha-106302
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-106302 logs -n 25: (4.06269002s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m04 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp testdata/cp-test.txt                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302:/home/docker/cp-test_ha-106302-m04_ha-106302.txt                     |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302 sudo cat                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302.txt                               |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m02:/home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m02 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m03:/home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n                                                               | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | ha-106302-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-106302 ssh -n ha-106302-m03 sudo cat                                        | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC | 05 Dec 24 19:23 UTC |
	|         | /home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-106302 node stop m02 -v=7                                                   | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:23 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-106302 node start m02 -v=7                                                  | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-106302 -v=7                                                         | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-106302 -v=7                                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:26 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-106302 --wait=true -v=7                                                  | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:28 UTC | 05 Dec 24 19:32 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-106302                                                              | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:32 UTC |                     |
	| node    | ha-106302 node delete m03 -v=7                                                 | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:32 UTC | 05 Dec 24 19:32 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | ha-106302 stop -v=7                                                            | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:32 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-106302 --wait=true                                                       | ha-106302 | jenkins | v1.34.0 | 05 Dec 24 19:35 UTC |                     |
	|         | -v=7 --alsologtostderr                                                         |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                       |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:35:13
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:35:13.608939  557310 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:35:13.609076  557310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:13.609087  557310 out.go:358] Setting ErrFile to fd 2...
	I1205 19:35:13.609094  557310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:13.609266  557310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:35:13.609859  557310 out.go:352] Setting JSON to false
	I1205 19:35:13.610943  557310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8260,"bootTime":1733419054,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:35:13.611061  557310 start.go:139] virtualization: kvm guest
	I1205 19:35:13.613524  557310 out.go:177] * [ha-106302] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:35:13.615073  557310 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:35:13.615129  557310 notify.go:220] Checking for updates...
	I1205 19:35:13.617901  557310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:13.619505  557310 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:35:13.620895  557310 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:35:13.622196  557310 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:35:13.623598  557310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:35:13.625506  557310 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:35:13.626132  557310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:35:13.626242  557310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:35:13.642386  557310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I1205 19:35:13.642965  557310 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:35:13.643637  557310 main.go:141] libmachine: Using API Version  1
	I1205 19:35:13.643670  557310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:35:13.643993  557310 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:35:13.644196  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:35:13.644548  557310 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:35:13.644847  557310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:35:13.644892  557310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:35:13.660291  557310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I1205 19:35:13.660771  557310 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:35:13.661277  557310 main.go:141] libmachine: Using API Version  1
	I1205 19:35:13.661308  557310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:35:13.661635  557310 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:35:13.661808  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:35:13.701443  557310 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 19:35:13.702909  557310 start.go:297] selected driver: kvm2
	I1205 19:35:13.702937  557310 start.go:901] validating driver "kvm2" against &{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:35:13.703160  557310 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:35:13.703625  557310 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:13.703724  557310 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:35:13.720236  557310 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:35:13.721022  557310 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:35:13.721061  557310 cni.go:84] Creating CNI manager for ""
	I1205 19:35:13.721114  557310 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 19:35:13.721169  557310 start.go:340] cluster config:
	{Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:35:13.721305  557310 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:13.723934  557310 out.go:177] * Starting "ha-106302" primary control-plane node in "ha-106302" cluster
	I1205 19:35:13.725356  557310 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:35:13.725414  557310 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:35:13.725443  557310 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:13.725565  557310 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:35:13.725579  557310 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:35:13.725751  557310 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/config.json ...
	I1205 19:35:13.726047  557310 start.go:360] acquireMachinesLock for ha-106302: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:35:13.726116  557310 start.go:364] duration metric: took 40.253µs to acquireMachinesLock for "ha-106302"
	I1205 19:35:13.726133  557310 start.go:96] Skipping create...Using existing machine configuration
	I1205 19:35:13.726164  557310 fix.go:54] fixHost starting: 
	I1205 19:35:13.726539  557310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:35:13.726580  557310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:35:13.742166  557310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35255
	I1205 19:35:13.742597  557310 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:35:13.743105  557310 main.go:141] libmachine: Using API Version  1
	I1205 19:35:13.743133  557310 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:35:13.743459  557310 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:35:13.743664  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:35:13.743794  557310 main.go:141] libmachine: (ha-106302) Calling .GetState
	I1205 19:35:13.745552  557310 fix.go:112] recreateIfNeeded on ha-106302: state=Running err=<nil>
	W1205 19:35:13.745579  557310 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 19:35:13.747689  557310 out.go:177] * Updating the running kvm2 "ha-106302" VM ...
	I1205 19:35:13.748915  557310 machine.go:93] provisionDockerMachine start ...
	I1205 19:35:13.748938  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:35:13.749154  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:13.751449  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:13.751875  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:13.751905  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:13.752048  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:35:13.752233  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:13.752422  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:13.752566  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:35:13.752711  557310 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:13.752976  557310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:35:13.752996  557310 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 19:35:13.869104  557310 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:35:13.869139  557310 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:35:13.869405  557310 buildroot.go:166] provisioning hostname "ha-106302"
	I1205 19:35:13.869434  557310 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:35:13.869603  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:13.872413  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:13.872860  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:13.872890  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:13.873071  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:35:13.873273  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:13.873449  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:13.873633  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:35:13.873793  557310 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:13.874037  557310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:35:13.874061  557310 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-106302 && echo "ha-106302" | sudo tee /etc/hostname
	I1205 19:35:14.004391  557310 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-106302
	
	I1205 19:35:14.004443  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:14.007343  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.007782  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:14.007820  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.007990  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:35:14.008181  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:14.008364  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:14.008500  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:35:14.008633  557310 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:14.008817  557310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:35:14.008835  557310 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-106302' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-106302/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-106302' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:35:14.125491  557310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:35:14.125539  557310 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:35:14.125592  557310 buildroot.go:174] setting up certificates
	I1205 19:35:14.125625  557310 provision.go:84] configureAuth start
	I1205 19:35:14.125643  557310 main.go:141] libmachine: (ha-106302) Calling .GetMachineName
	I1205 19:35:14.125940  557310 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:35:14.128603  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.129008  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:14.129033  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.129156  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:14.131646  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.132034  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:14.132062  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.132219  557310 provision.go:143] copyHostCerts
	I1205 19:35:14.132260  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:35:14.132333  557310 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:35:14.132353  557310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:35:14.132420  557310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:35:14.132515  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:35:14.132536  557310 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:35:14.132543  557310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:35:14.132570  557310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:35:14.132611  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:35:14.132633  557310 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:35:14.132645  557310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:35:14.132668  557310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:35:14.132713  557310 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.ha-106302 san=[127.0.0.1 192.168.39.185 ha-106302 localhost minikube]
	I1205 19:35:14.394858  557310 provision.go:177] copyRemoteCerts
	I1205 19:35:14.394931  557310 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:35:14.394968  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:14.397777  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.398087  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:14.398124  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.398302  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:35:14.398505  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:14.398650  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:35:14.398826  557310 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:35:14.487758  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:35:14.487898  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:35:14.517316  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:35:14.517408  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 19:35:14.560811  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:35:14.560884  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:35:14.588221  557310 provision.go:87] duration metric: took 462.576195ms to configureAuth
	I1205 19:35:14.588256  557310 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:35:14.588579  557310 config.go:182] Loaded profile config "ha-106302": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:35:14.588681  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:35:14.591514  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.591865  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:35:14.591893  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:35:14.592075  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:35:14.592331  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:14.592487  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:35:14.592655  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:35:14.592814  557310 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:14.593002  557310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:35:14.593020  557310 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:36:49.413483  557310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:36:49.413547  557310 machine.go:96] duration metric: took 1m35.664609788s to provisionDockerMachine
	I1205 19:36:49.413572  557310 start.go:293] postStartSetup for "ha-106302" (driver="kvm2")
	I1205 19:36:49.413587  557310 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:36:49.413625  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.414038  557310 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:36:49.414093  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:36:49.418151  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.418588  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.418619  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.418827  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:36:49.419032  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.419259  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:36:49.419448  557310 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:36:49.511350  557310 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:36:49.516672  557310 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 19:36:49.516714  557310 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 19:36:49.516809  557310 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 19:36:49.516922  557310 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 19:36:49.516942  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 19:36:49.517094  557310 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:36:49.528556  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:36:49.555509  557310 start.go:296] duration metric: took 141.9189ms for postStartSetup
	I1205 19:36:49.555567  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.555948  557310 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1205 19:36:49.556052  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:36:49.559436  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.559840  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.559864  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.560074  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:36:49.560327  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.560519  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:36:49.560665  557310 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	W1205 19:36:49.647623  557310 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1205 19:36:49.647661  557310 fix.go:56] duration metric: took 1m35.921521076s for fixHost
	I1205 19:36:49.647694  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:36:49.650424  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.650772  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.650806  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.650967  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:36:49.651200  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.651450  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.651624  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:36:49.651781  557310 main.go:141] libmachine: Using SSH client type: native
	I1205 19:36:49.651985  557310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I1205 19:36:49.651998  557310 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:36:49.761375  557310 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733427409.722330969
	
	I1205 19:36:49.761406  557310 fix.go:216] guest clock: 1733427409.722330969
	I1205 19:36:49.761415  557310 fix.go:229] Guest: 2024-12-05 19:36:49.722330969 +0000 UTC Remote: 2024-12-05 19:36:49.647676776 +0000 UTC m=+96.080577521 (delta=74.654193ms)
	I1205 19:36:49.761468  557310 fix.go:200] guest clock delta is within tolerance: 74.654193ms
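The guest-clock comparison above is simply "date +%s.%N" run over SSH and compared against the local wall clock. A minimal sketch of the same check from the host (key path and guest user taken from the sshutil lines above; awk is used only for the subtraction):

	guest=$(ssh -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa \
	    docker@192.168.39.185 'date +%s.%N')
	host=$(date +%s.%N)
	# Print the drift in seconds; the run above measured roughly 0.074s.
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest clock delta: %.3fs\n", h - g }'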
	I1205 19:36:49.761476  557310 start.go:83] releasing machines lock for "ha-106302", held for 1m36.035350243s
	I1205 19:36:49.761529  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.761803  557310 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:36:49.764694  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.765167  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.765191  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.765393  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.765978  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.766176  557310 main.go:141] libmachine: (ha-106302) Calling .DriverName
	I1205 19:36:49.766284  557310 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:36:49.766361  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:36:49.766440  557310 ssh_runner.go:195] Run: cat /version.json
	I1205 19:36:49.766471  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHHostname
	I1205 19:36:49.768949  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.769173  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.769405  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.769436  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.769619  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:36:49.769652  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:36:49.769659  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:36:49.769841  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHPort
	I1205 19:36:49.769857  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.770008  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHKeyPath
	I1205 19:36:49.770073  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:36:49.770154  557310 main.go:141] libmachine: (ha-106302) Calling .GetSSHUsername
	I1205 19:36:49.770249  557310 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:36:49.770354  557310 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/ha-106302/id_rsa Username:docker}
	I1205 19:36:49.855940  557310 ssh_runner.go:195] Run: systemctl --version
	I1205 19:36:49.906540  557310 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:36:50.162996  557310 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:36:50.174445  557310 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:36:50.174530  557310 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:36:50.189596  557310 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 19:36:50.189630  557310 start.go:495] detecting cgroup driver to use...
	I1205 19:36:50.189703  557310 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:36:50.208309  557310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:36:50.224082  557310 docker.go:217] disabling cri-docker service (if available) ...
	I1205 19:36:50.224155  557310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:36:50.239456  557310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:36:50.254131  557310 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:36:50.439057  557310 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:36:50.608162  557310 docker.go:233] disabling docker service ...
	I1205 19:36:50.608315  557310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:36:50.629770  557310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:36:50.645635  557310 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:36:50.810883  557310 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:36:50.974935  557310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:36:50.992944  557310 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:36:51.014041  557310 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 19:36:51.014129  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.025522  557310 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:36:51.025613  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.037015  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.048787  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.060389  557310 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:36:51.073332  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.085041  557310 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.097189  557310 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:36:51.109461  557310 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:36:51.121322  557310 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
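Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10, switch CRI-O to the cgroupfs driver with conmon in the "pod" cgroup, and open net.ipv4.ip_unprivileged_port_start. A quick spot-check of the result on the node (a sketch, assuming the stock /etc/crio/crio.conf.d/02-crio.conf drop-in that the commands above edit):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# Expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",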
	I1205 19:36:51.133701  557310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:36:51.297254  557310 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:38:25.572139  557310 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.27481933s)
	I1205 19:38:25.572200  557310 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:38:25.572297  557310 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:38:25.583120  557310 start.go:563] Will wait 60s for crictl version
	I1205 19:38:25.583188  557310 ssh_runner.go:195] Run: which crictl
	I1205 19:38:25.590169  557310 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:38:25.628394  557310 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 19:38:25.628510  557310 ssh_runner.go:195] Run: crio --version
	I1205 19:38:25.659655  557310 ssh_runner.go:195] Run: crio --version
	I1205 19:38:25.692450  557310 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 19:38:25.693996  557310 main.go:141] libmachine: (ha-106302) Calling .GetIP
	I1205 19:38:25.696995  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:38:25.697331  557310 main.go:141] libmachine: (ha-106302) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:e4:76", ip: ""} in network mk-ha-106302: {Iface:virbr1 ExpiryTime:2024-12-05 20:19:21 +0000 UTC Type:0 Mac:52:54:00:3b:e4:76 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:ha-106302 Clientid:01:52:54:00:3b:e4:76}
	I1205 19:38:25.697363  557310 main.go:141] libmachine: (ha-106302) DBG | domain ha-106302 has defined IP address 192.168.39.185 and MAC address 52:54:00:3b:e4:76 in network mk-ha-106302
	I1205 19:38:25.697679  557310 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:38:25.702888  557310 kubeadm.go:883] updating cluster {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 19:38:25.703050  557310 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:38:25.703116  557310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:38:25.778383  557310 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:38:25.778409  557310 crio.go:433] Images already preloaded, skipping extraction
	I1205 19:38:25.778470  557310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:38:25.816616  557310 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 19:38:25.816641  557310 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:38:25.816652  557310 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.2 crio true true} ...
	I1205 19:38:25.816817  557310 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-106302 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
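The [Service] fragment above is installed as a systemd drop-in (the scp to 10-kubeadm.conf further down); a small sketch for confirming which flags the kubelet actually picked up on the node:

	# Show the merged kubelet unit, including minikube's drop-in.
	sudo systemctl cat kubelet
	# Or read the drop-in directly at the path it is copied to below.
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf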
	I1205 19:38:25.816914  557310 ssh_runner.go:195] Run: crio config
	I1205 19:38:25.868250  557310 cni.go:84] Creating CNI manager for ""
	I1205 19:38:25.868298  557310 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 19:38:25.868312  557310 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 19:38:25.868352  557310 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-106302 NodeName:ha-106302 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:38:25.868501  557310 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-106302"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.185"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
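The rendered kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new a few steps below; assuming the bundled kubeadm provides the "config validate" subcommand (available in recent releases), it can be sanity-checked in place with a sketch like:

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new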
	
	I1205 19:38:25.868521  557310 kube-vip.go:115] generating kube-vip config ...
	I1205 19:38:25.868573  557310 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 19:38:25.882084  557310 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 19:38:25.882209  557310 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
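This manifest is dropped into /etc/kubernetes/manifests (the kube-vip.yaml scp below), so the kubelet runs kube-vip as a static pod and the 192.168.39.254 VIP should come up on eth0 on the elected leader. A sketch for verifying that on the node:

	sudo crictl ps --name kube-vip            # static pod container should be Running
	ip addr show eth0 | grep 192.168.39.254   # VIP is bound only on the current leader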
	I1205 19:38:25.882266  557310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 19:38:25.894192  557310 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:38:25.894295  557310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 19:38:25.905237  557310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 19:38:25.922788  557310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:38:25.941038  557310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 19:38:25.959304  557310 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 19:38:25.979506  557310 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 19:38:25.984676  557310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:38:26.143505  557310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 19:38:26.159640  557310 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302 for IP: 192.168.39.185
	I1205 19:38:26.159673  557310 certs.go:194] generating shared ca certs ...
	I1205 19:38:26.159697  557310 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:38:26.159922  557310 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 19:38:26.160007  557310 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 19:38:26.160019  557310 certs.go:256] generating profile certs ...
	I1205 19:38:26.160121  557310 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/client.key
	I1205 19:38:26.160158  557310 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.7ff0e2df
	I1205 19:38:26.160181  557310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.7ff0e2df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185 192.168.39.22 192.168.39.254]
	I1205 19:38:26.354068  557310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.7ff0e2df ...
	I1205 19:38:26.354108  557310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.7ff0e2df: {Name:mk3e0b7825cedb74ca15ceae5a04ae49f54cb3ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:38:26.354296  557310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.7ff0e2df ...
	I1205 19:38:26.354310  557310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.7ff0e2df: {Name:mke8becb21be3673d6efb9030d42d363dda6000c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:38:26.354382  557310 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt.7ff0e2df -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt
	I1205 19:38:26.354540  557310 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key.7ff0e2df -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key
	I1205 19:38:26.354674  557310 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key
	I1205 19:38:26.354691  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:38:26.354704  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:38:26.354715  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:38:26.354726  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:38:26.354738  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:38:26.354759  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:38:26.354771  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:38:26.354781  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:38:26.354833  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 19:38:26.354861  557310 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 19:38:26.354870  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 19:38:26.354897  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:38:26.354921  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:38:26.354945  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 19:38:26.354989  557310 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 19:38:26.355016  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:38:26.355030  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 19:38:26.355043  557310 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 19:38:26.355774  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:38:26.388406  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:38:26.417462  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:38:26.451021  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 19:38:26.483580  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 19:38:26.510181  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 19:38:26.538770  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:38:26.565811  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/ha-106302/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:38:26.592746  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:38:26.619843  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 19:38:26.647390  557310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 19:38:26.674528  557310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:38:26.694559  557310 ssh_runner.go:195] Run: openssl version
	I1205 19:38:26.701465  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:38:26.713395  557310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:38:26.718395  557310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:38:26.718467  557310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:38:26.724678  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:38:26.734863  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 19:38:26.748168  557310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 19:38:26.753156  557310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 19:38:26.753218  557310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 19:38:26.759556  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 19:38:26.769443  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 19:38:26.784109  557310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 19:38:26.789240  557310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 19:38:26.789305  557310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 19:38:26.795790  557310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
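The /etc/ssl/certs/<hash>.0 link names above are OpenSSL subject hashes, which is how each "hashing:" line pairs with its symlink; for example, for the minikube CA:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem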
	I1205 19:38:26.806806  557310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 19:38:26.812165  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 19:38:26.819255  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 19:38:26.826398  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 19:38:26.832922  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 19:38:26.839707  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 19:38:26.846142  557310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
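Each "-checkend 86400" call above makes openssl exit non-zero if the certificate would expire within the next 24 hours; for example:

	sudo openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    && echo "valid for at least 24h" || echo "expires within 24h"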
	I1205 19:38:26.852538  557310 kubeadm.go:392] StartCluster: {Name:ha-106302 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-106302 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:38:26.852702  557310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:38:26.852749  557310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:38:26.902544  557310 cri.go:89] found id: "68c51ee204eb38a76d3beb834b82b577e0b449836ad36f5c46f075920de17732"
	I1205 19:38:26.902577  557310 cri.go:89] found id: "32fe2df450a9c3d89d3262258e777533999ccd1b3dba13d865d18ba980b7ed84"
	I1205 19:38:26.902582  557310 cri.go:89] found id: "c919731ce702dca42064f6e0ada3d4683ad09f061fb0b88d01cf889107477795"
	I1205 19:38:26.902603  557310 cri.go:89] found id: "0fc17543bee06b8214ac1c280e5cb52c366fa10fc75e18b001c86e0169b81856"
	I1205 19:38:26.902608  557310 cri.go:89] found id: "b843aa8efcdc1ea1b5e09dc8e6b29dad424e9c4affbc89d637d5a7d60b1445e2"
	I1205 19:38:26.902613  557310 cri.go:89] found id: "84e3963ed85bb3a5ea031d8a1148eb2f08d1ea4c4d83ad008ee6ced6b50416ca"
	I1205 19:38:26.902617  557310 cri.go:89] found id: "fc5108482526152ad88bfc494a3bf0cee67d9a53098d14c6ba609a069c257141"
	I1205 19:38:26.902623  557310 cri.go:89] found id: "b0919086301b626233d57451a2fc83050d1c7f2645654a8df4cf9ff91941f522"
	I1205 19:38:26.902627  557310 cri.go:89] found id: "465daaace4a51bf2b449cfc51ba14245e8b8feecc525343a3ebd50e90491a498"
	I1205 19:38:26.902635  557310 cri.go:89] found id: "09848948abbcc34d17881a5af2d7991fff11355e498d698d8b9a69335b6a48da"
	I1205 19:38:26.902643  557310 cri.go:89] found id: "c843b20c132ebb58bdcd1dce1070460b290ac7096857aadbba9dd845f1480860"
	I1205 19:38:26.902651  557310 cri.go:89] found id: "75e431cbb51723de1318eeec7596ecac6d30be6779e91881edcab1537013f077"
	I1205 19:38:26.902658  557310 cri.go:89] found id: "a65bf505b34f9906f0f23b672cb884b36d128d9d24923b155e7a5a563c92cf4c"
	I1205 19:38:26.902663  557310 cri.go:89] found id: "d7af42dff52cf31e3d0b4c5b3bb3039a69b066d99b6f46d065147ba29c75204b"
	I1205 19:38:26.902674  557310 cri.go:89] found id: "71878f2ac51cecfe539f367c2ff49f6bc6b40022a7dff189245bd007d0260d07"
	I1205 19:38:26.902682  557310 cri.go:89] found id: "8e0e4de270d59927c1fd98dfbfca5bebec8750f72b7682863f1276e5cf4afe0e"
	I1205 19:38:26.902686  557310 cri.go:89] found id: "013c8063671c4aa3ba3a414d06a2537ce811bcd6e22e028d0ad8ab9af659022d"
	I1205 19:38:26.902693  557310 cri.go:89] found id: "73802addf28ef6b673245e1309d4d82c07c43374f514f1031e2a8277b4641e1a"
	I1205 19:38:26.902700  557310 cri.go:89] found id: "dec1697264029fa87be97fc70c56ce04eba1e67864a4b1b1f1e47cba052f7cf8"
	I1205 19:38:26.902704  557310 cri.go:89] found id: ""
	I1205 19:38:26.902766  557310 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-106302 -n ha-106302
helpers_test.go:261: (dbg) Run:  kubectl --context ha-106302 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (836.80s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (327.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-346389
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-346389
E1205 19:58:15.014331  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-346389: exit status 82 (2m1.988947398s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-346389-m03"  ...
	* Stopping node "multinode-346389-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-346389" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-346389 --wait=true -v=8 --alsologtostderr
E1205 20:00:51.383807  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-346389 --wait=true -v=8 --alsologtostderr: (3m22.090422199s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-346389
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-346389 -n multinode-346389
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-346389 logs -n 25: (2.275335891s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m02:/home/docker/cp-test.txt                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile122835969/001/cp-test_multinode-346389-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m02:/home/docker/cp-test.txt                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389:/home/docker/cp-test_multinode-346389-m02_multinode-346389.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n multinode-346389 sudo cat                                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /home/docker/cp-test_multinode-346389-m02_multinode-346389.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m02:/home/docker/cp-test.txt                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03:/home/docker/cp-test_multinode-346389-m02_multinode-346389-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n multinode-346389-m03 sudo cat                                   | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /home/docker/cp-test_multinode-346389-m02_multinode-346389-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp testdata/cp-test.txt                                                | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m03:/home/docker/cp-test.txt                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile122835969/001/cp-test_multinode-346389-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m03:/home/docker/cp-test.txt                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389:/home/docker/cp-test_multinode-346389-m03_multinode-346389.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n multinode-346389 sudo cat                                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /home/docker/cp-test_multinode-346389-m03_multinode-346389.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m03:/home/docker/cp-test.txt                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m02:/home/docker/cp-test_multinode-346389-m03_multinode-346389-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n multinode-346389-m02 sudo cat                                   | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /home/docker/cp-test_multinode-346389-m03_multinode-346389-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-346389 node stop m03                                                          | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	| node    | multinode-346389 node start                                                             | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-346389                                                                | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:57 UTC |                     |
	| stop    | -p multinode-346389                                                                     | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:57 UTC |                     |
	| start   | -p multinode-346389                                                                     | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:59 UTC | 05 Dec 24 20:03 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-346389                                                                | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 20:03 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:59:39
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:59:39.399073  567781 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:59:39.399210  567781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:59:39.399220  567781 out.go:358] Setting ErrFile to fd 2...
	I1205 19:59:39.399224  567781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:59:39.399433  567781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:59:39.399971  567781 out.go:352] Setting JSON to false
	I1205 19:59:39.401052  567781 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9725,"bootTime":1733419054,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:59:39.401124  567781 start.go:139] virtualization: kvm guest
	I1205 19:59:39.403691  567781 out.go:177] * [multinode-346389] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:59:39.405116  567781 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:59:39.405168  567781 notify.go:220] Checking for updates...
	I1205 19:59:39.407682  567781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:59:39.409030  567781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:59:39.410280  567781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:59:39.411606  567781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:59:39.413317  567781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:59:39.415155  567781 config.go:182] Loaded profile config "multinode-346389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:59:39.415318  567781 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:59:39.415963  567781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:59:39.416037  567781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:59:39.432351  567781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I1205 19:59:39.433058  567781 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:59:39.433740  567781 main.go:141] libmachine: Using API Version  1
	I1205 19:59:39.433764  567781 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:59:39.434314  567781 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:59:39.434547  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 19:59:39.471624  567781 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 19:59:39.473034  567781 start.go:297] selected driver: kvm2
	I1205 19:59:39.473050  567781 start.go:901] validating driver "kvm2" against &{Name:multinode-346389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-346389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:59:39.473245  567781 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:59:39.473708  567781 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:59:39.473824  567781 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:59:39.490838  567781 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:59:39.491573  567781 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:59:39.491609  567781 cni.go:84] Creating CNI manager for ""
	I1205 19:59:39.491670  567781 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 19:59:39.491742  567781 start.go:340] cluster config:
	{Name:multinode-346389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-346389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:59:39.491883  567781 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:59:39.494853  567781 out.go:177] * Starting "multinode-346389" primary control-plane node in "multinode-346389" cluster
	I1205 19:59:39.496260  567781 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:59:39.496374  567781 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:59:39.496387  567781 cache.go:56] Caching tarball of preloaded images
	I1205 19:59:39.496473  567781 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:59:39.496484  567781 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:59:39.496620  567781 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/config.json ...
	I1205 19:59:39.496822  567781 start.go:360] acquireMachinesLock for multinode-346389: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:59:39.496873  567781 start.go:364] duration metric: took 26.528µs to acquireMachinesLock for "multinode-346389"
	I1205 19:59:39.496886  567781 start.go:96] Skipping create...Using existing machine configuration
	I1205 19:59:39.496892  567781 fix.go:54] fixHost starting: 
	I1205 19:59:39.497152  567781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:59:39.497184  567781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:59:39.512871  567781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I1205 19:59:39.513426  567781 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:59:39.513852  567781 main.go:141] libmachine: Using API Version  1
	I1205 19:59:39.513871  567781 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:59:39.514232  567781 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:59:39.514461  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 19:59:39.514617  567781 main.go:141] libmachine: (multinode-346389) Calling .GetState
	I1205 19:59:39.516291  567781 fix.go:112] recreateIfNeeded on multinode-346389: state=Running err=<nil>
	W1205 19:59:39.516313  567781 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 19:59:39.518292  567781 out.go:177] * Updating the running kvm2 "multinode-346389" VM ...
	I1205 19:59:39.519859  567781 machine.go:93] provisionDockerMachine start ...
	I1205 19:59:39.519887  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 19:59:39.520139  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:39.522910  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.523360  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:39.523391  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.523535  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 19:59:39.523722  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.523945  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.524199  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 19:59:39.524452  567781 main.go:141] libmachine: Using SSH client type: native
	I1205 19:59:39.524676  567781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1205 19:59:39.524693  567781 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 19:59:39.634257  567781 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-346389
	
	I1205 19:59:39.634292  567781 main.go:141] libmachine: (multinode-346389) Calling .GetMachineName
	I1205 19:59:39.634594  567781 buildroot.go:166] provisioning hostname "multinode-346389"
	I1205 19:59:39.634626  567781 main.go:141] libmachine: (multinode-346389) Calling .GetMachineName
	I1205 19:59:39.634822  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:39.637801  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.638266  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:39.638301  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.638416  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 19:59:39.638648  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.638884  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.639050  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 19:59:39.639236  567781 main.go:141] libmachine: Using SSH client type: native
	I1205 19:59:39.639424  567781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1205 19:59:39.639444  567781 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-346389 && echo "multinode-346389" | sudo tee /etc/hostname
	I1205 19:59:39.760732  567781 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-346389
	
	I1205 19:59:39.760766  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:39.763651  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.764121  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:39.764157  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.764350  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 19:59:39.764549  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.764786  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.764960  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 19:59:39.765148  567781 main.go:141] libmachine: Using SSH client type: native
	I1205 19:59:39.765382  567781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1205 19:59:39.765402  567781 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-346389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-346389/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-346389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:59:39.873337  567781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:59:39.873371  567781 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:59:39.873405  567781 buildroot.go:174] setting up certificates
	I1205 19:59:39.873415  567781 provision.go:84] configureAuth start
	I1205 19:59:39.873429  567781 main.go:141] libmachine: (multinode-346389) Calling .GetMachineName
	I1205 19:59:39.873686  567781 main.go:141] libmachine: (multinode-346389) Calling .GetIP
	I1205 19:59:39.876305  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.876678  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:39.876697  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.876893  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:39.879105  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.879475  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:39.879512  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.879601  567781 provision.go:143] copyHostCerts
	I1205 19:59:39.879632  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:59:39.879664  567781 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:59:39.879683  567781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:59:39.879748  567781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:59:39.879823  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:59:39.879867  567781 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:59:39.879874  567781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:59:39.879899  567781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:59:39.879946  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:59:39.879962  567781 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:59:39.879968  567781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:59:39.879995  567781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:59:39.880050  567781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.multinode-346389 san=[127.0.0.1 192.168.39.170 localhost minikube multinode-346389]
	I1205 19:59:40.032449  567781 provision.go:177] copyRemoteCerts
	I1205 19:59:40.032514  567781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:59:40.032541  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:40.035424  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:40.035897  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:40.035939  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:40.036239  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 19:59:40.036455  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:40.036652  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 19:59:40.036797  567781 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/multinode-346389/id_rsa Username:docker}
	I1205 19:59:40.118594  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:59:40.118688  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:59:40.145381  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:59:40.145448  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 19:59:40.170930  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:59:40.171012  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:59:40.199760  567781 provision.go:87] duration metric: took 326.326113ms to configureAuth
	I1205 19:59:40.199795  567781 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:59:40.200034  567781 config.go:182] Loaded profile config "multinode-346389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:59:40.200115  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:40.202782  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:40.203187  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:40.203223  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:40.203437  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 19:59:40.203658  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:40.203825  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:40.203942  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 19:59:40.204140  567781 main.go:141] libmachine: Using SSH client type: native
	I1205 19:59:40.204336  567781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1205 19:59:40.204358  567781 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:01:11.016282  567781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:01:11.016331  567781 machine.go:96] duration metric: took 1m31.496451671s to provisionDockerMachine
	I1205 20:01:11.016349  567781 start.go:293] postStartSetup for "multinode-346389" (driver="kvm2")
	I1205 20:01:11.016376  567781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:01:11.016398  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 20:01:11.016712  567781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:01:11.016748  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 20:01:11.020041  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.020451  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:11.020475  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.020611  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 20:01:11.020831  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 20:01:11.021002  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 20:01:11.021161  567781 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/multinode-346389/id_rsa Username:docker}
	I1205 20:01:11.108501  567781 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:01:11.113266  567781 command_runner.go:130] > NAME=Buildroot
	I1205 20:01:11.113295  567781 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1205 20:01:11.113302  567781 command_runner.go:130] > ID=buildroot
	I1205 20:01:11.113309  567781 command_runner.go:130] > VERSION_ID=2023.02.9
	I1205 20:01:11.113317  567781 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1205 20:01:11.113364  567781 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:01:11.113382  567781 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:01:11.113448  567781 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:01:11.113521  567781 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:01:11.113532  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 20:01:11.113620  567781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:01:11.123521  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:01:11.149475  567781 start.go:296] duration metric: took 133.109918ms for postStartSetup
	I1205 20:01:11.149526  567781 fix.go:56] duration metric: took 1m31.6526341s for fixHost
	I1205 20:01:11.149552  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 20:01:11.152469  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.152891  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:11.152915  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.153149  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 20:01:11.153385  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 20:01:11.153549  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 20:01:11.153691  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 20:01:11.153901  567781 main.go:141] libmachine: Using SSH client type: native
	I1205 20:01:11.154150  567781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1205 20:01:11.154169  567781 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:01:11.257364  567781 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733428871.234345561
	
	I1205 20:01:11.257397  567781 fix.go:216] guest clock: 1733428871.234345561
	I1205 20:01:11.257407  567781 fix.go:229] Guest: 2024-12-05 20:01:11.234345561 +0000 UTC Remote: 2024-12-05 20:01:11.149534402 +0000 UTC m=+91.792579499 (delta=84.811159ms)
	I1205 20:01:11.257462  567781 fix.go:200] guest clock delta is within tolerance: 84.811159ms
	I1205 20:01:11.257472  567781 start.go:83] releasing machines lock for "multinode-346389", held for 1m31.760590935s
	I1205 20:01:11.257523  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 20:01:11.257862  567781 main.go:141] libmachine: (multinode-346389) Calling .GetIP
	I1205 20:01:11.260930  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.261381  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:11.261403  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.261610  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 20:01:11.262223  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 20:01:11.262421  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 20:01:11.262525  567781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:01:11.262574  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 20:01:11.262706  567781 ssh_runner.go:195] Run: cat /version.json
	I1205 20:01:11.262732  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 20:01:11.265487  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.265512  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.265940  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:11.266004  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:11.266029  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.266053  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.266130  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 20:01:11.266252  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 20:01:11.266331  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 20:01:11.266423  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 20:01:11.266472  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 20:01:11.266545  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 20:01:11.266612  567781 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/multinode-346389/id_rsa Username:docker}
	I1205 20:01:11.266665  567781 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/multinode-346389/id_rsa Username:docker}
	I1205 20:01:11.362407  567781 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 20:01:11.363427  567781 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1205 20:01:11.363572  567781 ssh_runner.go:195] Run: systemctl --version
	I1205 20:01:11.376813  567781 command_runner.go:130] > systemd 252 (252)
	I1205 20:01:11.376888  567781 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1205 20:01:11.377460  567781 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:01:11.543557  567781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:01:11.552396  567781 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 20:01:11.552803  567781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:01:11.552886  567781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:01:11.563824  567781 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 20:01:11.563854  567781 start.go:495] detecting cgroup driver to use...
	I1205 20:01:11.563935  567781 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:01:11.583402  567781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:01:11.598068  567781 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:01:11.598145  567781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:01:11.613054  567781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:01:11.627534  567781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:01:11.769698  567781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:01:11.919319  567781 docker.go:233] disabling docker service ...
	I1205 20:01:11.919387  567781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:01:11.937422  567781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:01:11.952413  567781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:01:12.097049  567781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:01:12.233683  567781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
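The sequence above (docker.go:217 and docker.go:233) stops, disables, and masks the cri-docker and docker units before the node is switched to CRI-O. A rough sketch of that stop/disable/mask pattern; it shells out locally for illustration, whereas minikube runs each command on the guest through its ssh_runner, and it applies all three steps to every unit rather than the exact per-unit mix shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

// disableUnit stops, disables, and masks a systemd unit.
func disableUnit(unit string) error {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v (%s)", args, err, out)
		}
	}
	return nil
}

func main() {
	for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		if err := disableUnit(u); err != nil {
			fmt.Println("warn:", err)
		}
	}
}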
	I1205 20:01:12.247963  567781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:01:12.267382  567781 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 20:01:12.267814  567781 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:01:12.267892  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.279940  567781 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:01:12.280029  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.290636  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.301203  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.311967  567781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:01:12.323374  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.335234  567781 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.347290  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
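The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing at the 3.10 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and an unprivileged-port sysctl. A sketch of the fragment those edits converge on, held in a Go constant purely for illustration; the section headers and any other settings in that file are assumptions, since the log only shows the sed commands:

package main

import "fmt"

// desiredCrioDropIn approximates what the sed edits aim for in
// /etc/crio/crio.conf.d/02-crio.conf. The [crio.runtime]/[crio.image]
// grouping is illustrative; the real file may differ.
const desiredCrioDropIn = `[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"
`

func main() { fmt.Print(desiredCrioDropIn) }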
	I1205 20:01:12.357956  567781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:01:12.367536  567781 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 20:01:12.367700  567781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:01:12.377244  567781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:01:12.513634  567781 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:01:12.725207  567781 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:01:12.725285  567781 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:01:12.730356  567781 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 20:01:12.730373  567781 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 20:01:12.730414  567781 command_runner.go:130] > Device: 0,22	Inode: 1290        Links: 1
	I1205 20:01:12.730427  567781 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:01:12.730433  567781 command_runner.go:130] > Access: 2024-12-05 20:01:12.589770690 +0000
	I1205 20:01:12.730446  567781 command_runner.go:130] > Modify: 2024-12-05 20:01:12.589770690 +0000
	I1205 20:01:12.730453  567781 command_runner.go:130] > Change: 2024-12-05 20:01:12.589770690 +0000
	I1205 20:01:12.730463  567781 command_runner.go:130] >  Birth: -
	I1205 20:01:12.730502  567781 start.go:563] Will wait 60s for crictl version
	I1205 20:01:12.730556  567781 ssh_runner.go:195] Run: which crictl
	I1205 20:01:12.734557  567781 command_runner.go:130] > /usr/bin/crictl
	I1205 20:01:12.734654  567781 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:01:12.771346  567781 command_runner.go:130] > Version:  0.1.0
	I1205 20:01:12.771373  567781 command_runner.go:130] > RuntimeName:  cri-o
	I1205 20:01:12.771380  567781 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1205 20:01:12.771389  567781 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 20:01:12.772539  567781 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
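After restarting CRI-O, the code waits up to 60s for /var/run/crio/crio.sock to appear and then probes `sudo crictl version`. A minimal polling sketch of that wait; it stats the local filesystem for simplicity, while the log shows minikube running `stat` on the guest via ssh_runner:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the deadline passes,
// mirroring "Will wait 60s for socket path /var/run/crio/crio.sock".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio.sock is ready; safe to run `sudo crictl version`")
}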
	I1205 20:01:12.772633  567781 ssh_runner.go:195] Run: crio --version
	I1205 20:01:12.801596  567781 command_runner.go:130] > crio version 1.29.1
	I1205 20:01:12.801625  567781 command_runner.go:130] > Version:        1.29.1
	I1205 20:01:12.801634  567781 command_runner.go:130] > GitCommit:      unknown
	I1205 20:01:12.801642  567781 command_runner.go:130] > GitCommitDate:  unknown
	I1205 20:01:12.801647  567781 command_runner.go:130] > GitTreeState:   clean
	I1205 20:01:12.801659  567781 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 20:01:12.801672  567781 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 20:01:12.801679  567781 command_runner.go:130] > Compiler:       gc
	I1205 20:01:12.801687  567781 command_runner.go:130] > Platform:       linux/amd64
	I1205 20:01:12.801695  567781 command_runner.go:130] > Linkmode:       dynamic
	I1205 20:01:12.801705  567781 command_runner.go:130] > BuildTags:      
	I1205 20:01:12.801713  567781 command_runner.go:130] >   containers_image_ostree_stub
	I1205 20:01:12.801723  567781 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 20:01:12.801729  567781 command_runner.go:130] >   btrfs_noversion
	I1205 20:01:12.801739  567781 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 20:01:12.801746  567781 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 20:01:12.801754  567781 command_runner.go:130] >   seccomp
	I1205 20:01:12.801760  567781 command_runner.go:130] > LDFlags:          unknown
	I1205 20:01:12.801812  567781 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:01:12.801836  567781 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:01:12.803030  567781 ssh_runner.go:195] Run: crio --version
	I1205 20:01:12.830598  567781 command_runner.go:130] > crio version 1.29.1
	I1205 20:01:12.830631  567781 command_runner.go:130] > Version:        1.29.1
	I1205 20:01:12.830640  567781 command_runner.go:130] > GitCommit:      unknown
	I1205 20:01:12.830648  567781 command_runner.go:130] > GitCommitDate:  unknown
	I1205 20:01:12.830655  567781 command_runner.go:130] > GitTreeState:   clean
	I1205 20:01:12.830664  567781 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 20:01:12.830671  567781 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 20:01:12.830678  567781 command_runner.go:130] > Compiler:       gc
	I1205 20:01:12.830689  567781 command_runner.go:130] > Platform:       linux/amd64
	I1205 20:01:12.830696  567781 command_runner.go:130] > Linkmode:       dynamic
	I1205 20:01:12.830707  567781 command_runner.go:130] > BuildTags:      
	I1205 20:01:12.830715  567781 command_runner.go:130] >   containers_image_ostree_stub
	I1205 20:01:12.830725  567781 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 20:01:12.830732  567781 command_runner.go:130] >   btrfs_noversion
	I1205 20:01:12.830743  567781 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 20:01:12.830758  567781 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 20:01:12.830767  567781 command_runner.go:130] >   seccomp
	I1205 20:01:12.830774  567781 command_runner.go:130] > LDFlags:          unknown
	I1205 20:01:12.830783  567781 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:01:12.830791  567781 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:01:12.834139  567781 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:01:12.835756  567781 main.go:141] libmachine: (multinode-346389) Calling .GetIP
	I1205 20:01:12.838685  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:12.839081  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:12.839112  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:12.839308  567781 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:01:12.844041  567781 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1205 20:01:12.844146  567781 kubeadm.go:883] updating cluster {Name:multinode-346389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-346389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:01:12.844323  567781 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:01:12.844374  567781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:01:12.890952  567781 command_runner.go:130] > {
	I1205 20:01:12.890983  567781 command_runner.go:130] >   "images": [
	I1205 20:01:12.890989  567781 command_runner.go:130] >     {
	I1205 20:01:12.891000  567781 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 20:01:12.891007  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891014  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 20:01:12.891020  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891025  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891037  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 20:01:12.891065  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 20:01:12.891074  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891080  567781 command_runner.go:130] >       "size": "94965812",
	I1205 20:01:12.891089  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.891102  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891113  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891120  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891125  567781 command_runner.go:130] >     },
	I1205 20:01:12.891130  567781 command_runner.go:130] >     {
	I1205 20:01:12.891139  567781 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 20:01:12.891147  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891156  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 20:01:12.891164  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891171  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891184  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 20:01:12.891196  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 20:01:12.891205  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891214  567781 command_runner.go:130] >       "size": "94958644",
	I1205 20:01:12.891222  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.891235  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891244  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891253  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891261  567781 command_runner.go:130] >     },
	I1205 20:01:12.891266  567781 command_runner.go:130] >     {
	I1205 20:01:12.891278  567781 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 20:01:12.891288  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891299  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 20:01:12.891308  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891316  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891327  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 20:01:12.891338  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 20:01:12.891347  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891359  567781 command_runner.go:130] >       "size": "1363676",
	I1205 20:01:12.891368  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.891377  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891386  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891396  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891419  567781 command_runner.go:130] >     },
	I1205 20:01:12.891428  567781 command_runner.go:130] >     {
	I1205 20:01:12.891437  567781 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 20:01:12.891445  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891456  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 20:01:12.891465  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891475  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891491  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 20:01:12.891513  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 20:01:12.891520  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891527  567781 command_runner.go:130] >       "size": "31470524",
	I1205 20:01:12.891536  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.891543  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891552  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891562  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891570  567781 command_runner.go:130] >     },
	I1205 20:01:12.891576  567781 command_runner.go:130] >     {
	I1205 20:01:12.891589  567781 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 20:01:12.891598  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891608  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 20:01:12.891615  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891621  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891634  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 20:01:12.891648  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 20:01:12.891656  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891662  567781 command_runner.go:130] >       "size": "63273227",
	I1205 20:01:12.891670  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.891675  567781 command_runner.go:130] >       "username": "nonroot",
	I1205 20:01:12.891684  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891693  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891698  567781 command_runner.go:130] >     },
	I1205 20:01:12.891707  567781 command_runner.go:130] >     {
	I1205 20:01:12.891719  567781 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 20:01:12.891738  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891749  567781 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 20:01:12.891758  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891766  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891780  567781 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 20:01:12.891795  567781 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 20:01:12.891805  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891815  567781 command_runner.go:130] >       "size": "149009664",
	I1205 20:01:12.891824  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.891830  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.891838  567781 command_runner.go:130] >       },
	I1205 20:01:12.891845  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891855  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891865  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891873  567781 command_runner.go:130] >     },
	I1205 20:01:12.891878  567781 command_runner.go:130] >     {
	I1205 20:01:12.891890  567781 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 20:01:12.891899  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891910  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 20:01:12.891919  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891928  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891941  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 20:01:12.891958  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 20:01:12.891965  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891970  567781 command_runner.go:130] >       "size": "95274464",
	I1205 20:01:12.891982  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.891986  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.891990  567781 command_runner.go:130] >       },
	I1205 20:01:12.891993  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891999  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.892005  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.892011  567781 command_runner.go:130] >     },
	I1205 20:01:12.892016  567781 command_runner.go:130] >     {
	I1205 20:01:12.892038  567781 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 20:01:12.892045  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.892065  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 20:01:12.892071  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892077  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.892104  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 20:01:12.892115  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 20:01:12.892118  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892122  567781 command_runner.go:130] >       "size": "89474374",
	I1205 20:01:12.892126  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.892129  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.892133  567781 command_runner.go:130] >       },
	I1205 20:01:12.892137  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.892140  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.892144  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.892147  567781 command_runner.go:130] >     },
	I1205 20:01:12.892151  567781 command_runner.go:130] >     {
	I1205 20:01:12.892156  567781 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 20:01:12.892177  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.892193  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 20:01:12.892197  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892202  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.892210  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 20:01:12.892216  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 20:01:12.892222  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892226  567781 command_runner.go:130] >       "size": "92783513",
	I1205 20:01:12.892230  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.892235  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.892238  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.892242  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.892246  567781 command_runner.go:130] >     },
	I1205 20:01:12.892250  567781 command_runner.go:130] >     {
	I1205 20:01:12.892256  567781 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 20:01:12.892286  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.892295  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 20:01:12.892302  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892307  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.892314  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 20:01:12.892322  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 20:01:12.892326  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892330  567781 command_runner.go:130] >       "size": "68457798",
	I1205 20:01:12.892335  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.892338  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.892341  567781 command_runner.go:130] >       },
	I1205 20:01:12.892346  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.892350  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.892353  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.892357  567781 command_runner.go:130] >     },
	I1205 20:01:12.892360  567781 command_runner.go:130] >     {
	I1205 20:01:12.892366  567781 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 20:01:12.892370  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.892375  567781 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 20:01:12.892379  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892383  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.892390  567781 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 20:01:12.892400  567781 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 20:01:12.892406  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892412  567781 command_runner.go:130] >       "size": "742080",
	I1205 20:01:12.892422  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.892429  567781 command_runner.go:130] >         "value": "65535"
	I1205 20:01:12.892435  567781 command_runner.go:130] >       },
	I1205 20:01:12.892441  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.892449  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.892455  567781 command_runner.go:130] >       "pinned": true
	I1205 20:01:12.892462  567781 command_runner.go:130] >     }
	I1205 20:01:12.892465  567781 command_runner.go:130] >   ]
	I1205 20:01:12.892474  567781 command_runner.go:130] > }
	I1205 20:01:12.892691  567781 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:01:12.892704  567781 crio.go:433] Images already preloaded, skipping extraction
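crio.go concludes "all images are preloaded" by parsing the `sudo crictl images --output json` dump above and checking its repoTags against the images required for Kubernetes v1.31.2. A minimal sketch of that check, with a deliberately shortened expected-image list (the full list minikube compares against is not reproduced here):

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the shape of `crictl images --output json` seen in the log.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every wanted tag appears in the dump.
func allPreloaded(raw []byte, wanted []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, w := range wanted {
		if !have[w] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Shortened stand-in for the JSON dump above.
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"]}]}`)
	ok, err := allPreloaded(raw, []string{"registry.k8s.io/kube-apiserver:v1.31.2"})
	fmt.Println(ok, err) // prints: true <nil> for this shortened example
}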
	I1205 20:01:12.892758  567781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:01:12.924244  567781 command_runner.go:130] > {
	I1205 20:01:12.924286  567781 command_runner.go:130] >   "images": [
	I1205 20:01:12.924293  567781 command_runner.go:130] >     {
	I1205 20:01:12.924306  567781 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 20:01:12.924314  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.924323  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 20:01:12.924334  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924342  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.924357  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 20:01:12.924373  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 20:01:12.924380  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924388  567781 command_runner.go:130] >       "size": "94965812",
	I1205 20:01:12.924397  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.924403  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.924423  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.924434  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.924440  567781 command_runner.go:130] >     },
	I1205 20:01:12.924447  567781 command_runner.go:130] >     {
	I1205 20:01:12.924458  567781 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 20:01:12.924468  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.924477  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 20:01:12.924486  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924496  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.924511  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 20:01:12.924527  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 20:01:12.924537  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924551  567781 command_runner.go:130] >       "size": "94958644",
	I1205 20:01:12.924561  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.924573  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.924582  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.924590  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.924598  567781 command_runner.go:130] >     },
	I1205 20:01:12.924606  567781 command_runner.go:130] >     {
	I1205 20:01:12.924615  567781 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 20:01:12.924624  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.924632  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 20:01:12.924640  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924646  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.924656  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 20:01:12.924669  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 20:01:12.924679  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924686  567781 command_runner.go:130] >       "size": "1363676",
	I1205 20:01:12.924695  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.924701  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.924713  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.924722  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.924730  567781 command_runner.go:130] >     },
	I1205 20:01:12.924735  567781 command_runner.go:130] >     {
	I1205 20:01:12.924747  567781 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 20:01:12.924753  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.924764  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 20:01:12.924773  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924780  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.924795  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 20:01:12.924820  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 20:01:12.924830  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924837  567781 command_runner.go:130] >       "size": "31470524",
	I1205 20:01:12.924846  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.924852  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.924866  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.924875  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.924880  567781 command_runner.go:130] >     },
	I1205 20:01:12.924888  567781 command_runner.go:130] >     {
	I1205 20:01:12.924897  567781 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 20:01:12.924906  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.924915  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 20:01:12.924923  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924929  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.924943  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 20:01:12.924957  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 20:01:12.924964  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924973  567781 command_runner.go:130] >       "size": "63273227",
	I1205 20:01:12.924979  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.924989  567781 command_runner.go:130] >       "username": "nonroot",
	I1205 20:01:12.924995  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925005  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925011  567781 command_runner.go:130] >     },
	I1205 20:01:12.925020  567781 command_runner.go:130] >     {
	I1205 20:01:12.925030  567781 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 20:01:12.925040  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925047  567781 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 20:01:12.925059  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925073  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925087  567781 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 20:01:12.925102  567781 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 20:01:12.925111  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925118  567781 command_runner.go:130] >       "size": "149009664",
	I1205 20:01:12.925126  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.925132  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.925146  567781 command_runner.go:130] >       },
	I1205 20:01:12.925154  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925159  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925175  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925184  567781 command_runner.go:130] >     },
	I1205 20:01:12.925190  567781 command_runner.go:130] >     {
	I1205 20:01:12.925202  567781 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 20:01:12.925211  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925219  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 20:01:12.925227  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925232  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925246  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 20:01:12.925259  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 20:01:12.925265  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925275  567781 command_runner.go:130] >       "size": "95274464",
	I1205 20:01:12.925281  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.925291  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.925297  567781 command_runner.go:130] >       },
	I1205 20:01:12.925306  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925311  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925320  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925325  567781 command_runner.go:130] >     },
	I1205 20:01:12.925330  567781 command_runner.go:130] >     {
	I1205 20:01:12.925342  567781 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 20:01:12.925351  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925362  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 20:01:12.925369  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925374  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925411  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 20:01:12.925427  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 20:01:12.925433  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925438  567781 command_runner.go:130] >       "size": "89474374",
	I1205 20:01:12.925447  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.925453  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.925458  567781 command_runner.go:130] >       },
	I1205 20:01:12.925466  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925480  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925491  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925496  567781 command_runner.go:130] >     },
	I1205 20:01:12.925506  567781 command_runner.go:130] >     {
	I1205 20:01:12.925515  567781 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 20:01:12.925525  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925536  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 20:01:12.925544  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925551  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925562  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 20:01:12.925582  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 20:01:12.925591  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925598  567781 command_runner.go:130] >       "size": "92783513",
	I1205 20:01:12.925608  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.925615  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925624  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925633  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925639  567781 command_runner.go:130] >     },
	I1205 20:01:12.925648  567781 command_runner.go:130] >     {
	I1205 20:01:12.925656  567781 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 20:01:12.925666  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925673  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 20:01:12.925682  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925689  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925703  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 20:01:12.925723  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 20:01:12.925727  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925733  567781 command_runner.go:130] >       "size": "68457798",
	I1205 20:01:12.925738  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.925744  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.925749  567781 command_runner.go:130] >       },
	I1205 20:01:12.925755  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925761  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925779  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925786  567781 command_runner.go:130] >     },
	I1205 20:01:12.925795  567781 command_runner.go:130] >     {
	I1205 20:01:12.925804  567781 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 20:01:12.925812  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925819  567781 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 20:01:12.925825  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925831  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925843  567781 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 20:01:12.925859  567781 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 20:01:12.925866  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925872  567781 command_runner.go:130] >       "size": "742080",
	I1205 20:01:12.925879  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.925885  567781 command_runner.go:130] >         "value": "65535"
	I1205 20:01:12.925891  567781 command_runner.go:130] >       },
	I1205 20:01:12.925896  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925900  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925906  567781 command_runner.go:130] >       "pinned": true
	I1205 20:01:12.925909  567781 command_runner.go:130] >     }
	I1205 20:01:12.925914  567781 command_runner.go:130] >   ]
	I1205 20:01:12.925921  567781 command_runner.go:130] > }
	I1205 20:01:12.926163  567781 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:01:12.926187  567781 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:01:12.926198  567781 kubeadm.go:934] updating node { 192.168.39.170 8443 v1.31.2 crio true true} ...
	I1205 20:01:12.926368  567781 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-346389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-346389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
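The kubelet drop-in printed by kubeadm.go:946 above is generated from the node's name and IP (the hostname-override and node-ip flags match multinode-346389 / 192.168.39.170). A small text/template sketch of how such a drop-in could be rendered; the template text is paraphrased from the log, not copied from minikube's sources:

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn paraphrases the unit fragment shown in the log above.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the cluster state logged above.
	if err := tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.2",
		"NodeName":          "multinode-346389",
		"NodeIP":            "192.168.39.170",
	}); err != nil {
		panic(err)
	}
}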
	I1205 20:01:12.926464  567781 ssh_runner.go:195] Run: crio config
	I1205 20:01:12.970113  567781 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 20:01:12.970150  567781 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 20:01:12.970162  567781 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 20:01:12.970168  567781 command_runner.go:130] > #
	I1205 20:01:12.970179  567781 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 20:01:12.970186  567781 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 20:01:12.970193  567781 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 20:01:12.970200  567781 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 20:01:12.970204  567781 command_runner.go:130] > # reload'.
	I1205 20:01:12.970210  567781 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 20:01:12.970218  567781 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 20:01:12.970229  567781 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 20:01:12.970258  567781 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 20:01:12.970268  567781 command_runner.go:130] > [crio]
	I1205 20:01:12.970274  567781 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 20:01:12.970284  567781 command_runner.go:130] > # containers images, in this directory.
	I1205 20:01:12.970289  567781 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1205 20:01:12.970301  567781 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 20:01:12.970306  567781 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1205 20:01:12.970317  567781 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1205 20:01:12.970321  567781 command_runner.go:130] > # imagestore = ""
	I1205 20:01:12.970328  567781 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 20:01:12.970335  567781 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 20:01:12.970340  567781 command_runner.go:130] > storage_driver = "overlay"
	I1205 20:01:12.970346  567781 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 20:01:12.970353  567781 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 20:01:12.970357  567781 command_runner.go:130] > storage_option = [
	I1205 20:01:12.970363  567781 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1205 20:01:12.970367  567781 command_runner.go:130] > ]
	I1205 20:01:12.970373  567781 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 20:01:12.970386  567781 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 20:01:12.970397  567781 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 20:01:12.970410  567781 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 20:01:12.970420  567781 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 20:01:12.970431  567781 command_runner.go:130] > # always happen on a node reboot
	I1205 20:01:12.970438  567781 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 20:01:12.970463  567781 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 20:01:12.970475  567781 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 20:01:12.970484  567781 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 20:01:12.970496  567781 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1205 20:01:12.970508  567781 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 20:01:12.970524  567781 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 20:01:12.970533  567781 command_runner.go:130] > # internal_wipe = true
	I1205 20:01:12.970545  567781 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1205 20:01:12.970557  567781 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1205 20:01:12.970576  567781 command_runner.go:130] > # internal_repair = false
	I1205 20:01:12.970585  567781 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 20:01:12.970594  567781 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 20:01:12.970604  567781 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 20:01:12.970615  567781 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 20:01:12.970629  567781 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 20:01:12.970639  567781 command_runner.go:130] > [crio.api]
	I1205 20:01:12.970650  567781 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 20:01:12.970661  567781 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 20:01:12.970672  567781 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 20:01:12.970682  567781 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 20:01:12.970697  567781 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 20:01:12.970708  567781 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 20:01:12.970718  567781 command_runner.go:130] > # stream_port = "0"
	I1205 20:01:12.970730  567781 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 20:01:12.970739  567781 command_runner.go:130] > # stream_enable_tls = false
	I1205 20:01:12.970758  567781 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 20:01:12.970770  567781 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 20:01:12.970783  567781 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 20:01:12.970797  567781 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 20:01:12.970809  567781 command_runner.go:130] > # minutes.
	I1205 20:01:12.970819  567781 command_runner.go:130] > # stream_tls_cert = ""
	I1205 20:01:12.970839  567781 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 20:01:12.970853  567781 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 20:01:12.970863  567781 command_runner.go:130] > # stream_tls_key = ""
	I1205 20:01:12.970873  567781 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 20:01:12.970887  567781 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 20:01:12.970914  567781 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 20:01:12.970925  567781 command_runner.go:130] > # stream_tls_ca = ""
	I1205 20:01:12.970940  567781 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 20:01:12.970952  567781 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1205 20:01:12.970962  567781 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 20:01:12.970968  567781 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
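The two grpc_max_*_msg_size values above drop CRI-O's 80 MiB gRPC defaults down to 16 MiB, so a client on this socket should size its own call options to match. A minimal Go sketch of such a client dial, assuming the default CRI-O socket path shown in this config; this is illustrative, not minikube's own code:

```go
// Minimal sketch: dial the CRI-O socket and cap client-side gRPC message
// sizes at the same 16 MiB configured above (grpc_max_{send,recv}_msg_size).
// Assumes google.golang.org/grpc; the socket path is the default from this config.
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	const maxMsgSize = 16 * 1024 * 1024 // matches grpc_max_send_msg_size / grpc_max_recv_msg_size = 16777216

	conn, err := grpc.Dial(
		"unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxMsgSize),
			grpc.MaxCallSendMsgSize(maxMsgSize),
		),
	)
	if err != nil {
		log.Fatalf("dial crio: %v", err)
	}
	defer conn.Close()
	log.Printf("connected, state: %s", conn.GetState())
}
```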
	I1205 20:01:12.970985  567781 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 20:01:12.970997  567781 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 20:01:12.971003  567781 command_runner.go:130] > [crio.runtime]
	I1205 20:01:12.971015  567781 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 20:01:12.971024  567781 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 20:01:12.971032  567781 command_runner.go:130] > # "nofile=1024:2048"
	I1205 20:01:12.971042  567781 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 20:01:12.971051  567781 command_runner.go:130] > # default_ulimits = [
	I1205 20:01:12.971056  567781 command_runner.go:130] > # ]
	I1205 20:01:12.971070  567781 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 20:01:12.971081  567781 command_runner.go:130] > # no_pivot = false
	I1205 20:01:12.971089  567781 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 20:01:12.971102  567781 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 20:01:12.971113  567781 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 20:01:12.971123  567781 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 20:01:12.971134  567781 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 20:01:12.971147  567781 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:01:12.971158  567781 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1205 20:01:12.971166  567781 command_runner.go:130] > # Cgroup setting for conmon
	I1205 20:01:12.971180  567781 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 20:01:12.971188  567781 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 20:01:12.971196  567781 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 20:01:12.971207  567781 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 20:01:12.971219  567781 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:01:12.971227  567781 command_runner.go:130] > conmon_env = [
	I1205 20:01:12.971236  567781 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 20:01:12.971248  567781 command_runner.go:130] > ]
	I1205 20:01:12.971259  567781 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 20:01:12.971268  567781 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 20:01:12.971287  567781 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 20:01:12.971298  567781 command_runner.go:130] > # default_env = [
	I1205 20:01:12.971304  567781 command_runner.go:130] > # ]
	I1205 20:01:12.971315  567781 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 20:01:12.971334  567781 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1205 20:01:12.971344  567781 command_runner.go:130] > # selinux = false
	I1205 20:01:12.971353  567781 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 20:01:12.971364  567781 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 20:01:12.971376  567781 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 20:01:12.971383  567781 command_runner.go:130] > # seccomp_profile = ""
	I1205 20:01:12.971397  567781 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 20:01:12.971406  567781 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 20:01:12.971419  567781 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 20:01:12.971430  567781 command_runner.go:130] > # which might increase security.
	I1205 20:01:12.971437  567781 command_runner.go:130] > # This option is currently deprecated,
	I1205 20:01:12.971453  567781 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1205 20:01:12.971463  567781 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1205 20:01:12.971474  567781 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 20:01:12.971487  567781 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 20:01:12.971502  567781 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 20:01:12.971514  567781 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 20:01:12.971526  567781 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:01:12.971533  567781 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 20:01:12.971542  567781 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 20:01:12.971550  567781 command_runner.go:130] > # the cgroup blockio controller.
	I1205 20:01:12.971559  567781 command_runner.go:130] > # blockio_config_file = ""
	I1205 20:01:12.971571  567781 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1205 20:01:12.971581  567781 command_runner.go:130] > # blockio parameters.
	I1205 20:01:12.971588  567781 command_runner.go:130] > # blockio_reload = false
	I1205 20:01:12.971609  567781 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 20:01:12.971619  567781 command_runner.go:130] > # irqbalance daemon.
	I1205 20:01:12.971628  567781 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 20:01:12.971640  567781 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1205 20:01:12.971653  567781 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1205 20:01:12.971666  567781 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1205 20:01:12.971683  567781 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1205 20:01:12.971697  567781 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 20:01:12.971716  567781 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:01:12.971728  567781 command_runner.go:130] > # rdt_config_file = ""
	I1205 20:01:12.971737  567781 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 20:01:12.971745  567781 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 20:01:12.971769  567781 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 20:01:12.971779  567781 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 20:01:12.971786  567781 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 20:01:12.971798  567781 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 20:01:12.971807  567781 command_runner.go:130] > # will be added.
	I1205 20:01:12.971814  567781 command_runner.go:130] > # default_capabilities = [
	I1205 20:01:12.971824  567781 command_runner.go:130] > # 	"CHOWN",
	I1205 20:01:12.971829  567781 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 20:01:12.971836  567781 command_runner.go:130] > # 	"FSETID",
	I1205 20:01:12.971845  567781 command_runner.go:130] > # 	"FOWNER",
	I1205 20:01:12.971852  567781 command_runner.go:130] > # 	"SETGID",
	I1205 20:01:12.971861  567781 command_runner.go:130] > # 	"SETUID",
	I1205 20:01:12.971866  567781 command_runner.go:130] > # 	"SETPCAP",
	I1205 20:01:12.971874  567781 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 20:01:12.971877  567781 command_runner.go:130] > # 	"KILL",
	I1205 20:01:12.971883  567781 command_runner.go:130] > # ]
	I1205 20:01:12.971897  567781 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 20:01:12.971911  567781 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 20:01:12.971922  567781 command_runner.go:130] > # add_inheritable_capabilities = false
	I1205 20:01:12.971931  567781 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 20:01:12.971944  567781 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:01:12.971951  567781 command_runner.go:130] > default_sysctls = [
	I1205 20:01:12.971962  567781 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1205 20:01:12.971970  567781 command_runner.go:130] > ]
	I1205 20:01:12.971979  567781 command_runner.go:130] > # List of devices on the host that a
	I1205 20:01:12.971991  567781 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 20:01:12.972001  567781 command_runner.go:130] > # allowed_devices = [
	I1205 20:01:12.972008  567781 command_runner.go:130] > # 	"/dev/fuse",
	I1205 20:01:12.972017  567781 command_runner.go:130] > # ]
	I1205 20:01:12.972027  567781 command_runner.go:130] > # List of additional devices, specified as
	I1205 20:01:12.972041  567781 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 20:01:12.972050  567781 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 20:01:12.972060  567781 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:01:12.972070  567781 command_runner.go:130] > # additional_devices = [
	I1205 20:01:12.972076  567781 command_runner.go:130] > # ]
	I1205 20:01:12.972088  567781 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 20:01:12.972100  567781 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 20:01:12.972107  567781 command_runner.go:130] > # 	"/etc/cdi",
	I1205 20:01:12.972117  567781 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 20:01:12.972122  567781 command_runner.go:130] > # ]
	I1205 20:01:12.972141  567781 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 20:01:12.972155  567781 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 20:01:12.972166  567781 command_runner.go:130] > # Defaults to false.
	I1205 20:01:12.972174  567781 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 20:01:12.972187  567781 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 20:01:12.972199  567781 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 20:01:12.972206  567781 command_runner.go:130] > # hooks_dir = [
	I1205 20:01:12.972215  567781 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 20:01:12.972219  567781 command_runner.go:130] > # ]
	I1205 20:01:12.972228  567781 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 20:01:12.972242  567781 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 20:01:12.972253  567781 command_runner.go:130] > # its default mounts from the following two files:
	I1205 20:01:12.972258  567781 command_runner.go:130] > #
	I1205 20:01:12.972290  567781 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 20:01:12.972305  567781 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 20:01:12.972314  567781 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 20:01:12.972323  567781 command_runner.go:130] > #
	I1205 20:01:12.972333  567781 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 20:01:12.972345  567781 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 20:01:12.972359  567781 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 20:01:12.972371  567781 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 20:01:12.972379  567781 command_runner.go:130] > #
	I1205 20:01:12.972389  567781 command_runner.go:130] > # default_mounts_file = ""
	I1205 20:01:12.972400  567781 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 20:01:12.972415  567781 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 20:01:12.972422  567781 command_runner.go:130] > pids_limit = 1024
	I1205 20:01:12.972436  567781 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 20:01:12.972454  567781 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 20:01:12.972467  567781 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 20:01:12.972483  567781 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 20:01:12.972492  567781 command_runner.go:130] > # log_size_max = -1
	I1205 20:01:12.972504  567781 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 20:01:12.972515  567781 command_runner.go:130] > # log_to_journald = false
	I1205 20:01:12.972525  567781 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 20:01:12.972533  567781 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 20:01:12.972549  567781 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 20:01:12.972561  567781 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 20:01:12.972570  567781 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 20:01:12.972579  567781 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 20:01:12.972588  567781 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 20:01:12.972596  567781 command_runner.go:130] > # read_only = false
	I1205 20:01:12.972602  567781 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 20:01:12.972615  567781 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 20:01:12.972625  567781 command_runner.go:130] > # live configuration reload.
	I1205 20:01:12.972632  567781 command_runner.go:130] > # log_level = "info"
	I1205 20:01:12.972643  567781 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 20:01:12.972654  567781 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:01:12.972661  567781 command_runner.go:130] > # log_filter = ""
	I1205 20:01:12.972693  567781 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 20:01:12.972714  567781 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 20:01:12.972724  567781 command_runner.go:130] > # separated by comma.
	I1205 20:01:12.972740  567781 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 20:01:12.972750  567781 command_runner.go:130] > # uid_mappings = ""
	I1205 20:01:12.972759  567781 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 20:01:12.972772  567781 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 20:01:12.972788  567781 command_runner.go:130] > # separated by comma.
	I1205 20:01:12.972800  567781 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 20:01:12.972811  567781 command_runner.go:130] > # gid_mappings = ""
	I1205 20:01:12.972821  567781 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 20:01:12.972835  567781 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:01:12.972848  567781 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:01:12.972863  567781 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 20:01:12.972874  567781 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 20:01:12.972882  567781 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 20:01:12.972891  567781 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:01:12.972901  567781 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:01:12.972918  567781 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 20:01:12.972925  567781 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 20:01:12.972939  567781 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 20:01:12.972951  567781 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 20:01:12.972963  567781 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 20:01:12.972978  567781 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 20:01:12.972987  567781 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 20:01:12.972995  567781 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 20:01:12.973006  567781 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 20:01:12.973014  567781 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 20:01:12.973024  567781 command_runner.go:130] > drop_infra_ctr = false
	I1205 20:01:12.973033  567781 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 20:01:12.973045  567781 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 20:01:12.973061  567781 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 20:01:12.973071  567781 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 20:01:12.973083  567781 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1205 20:01:12.973096  567781 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1205 20:01:12.973106  567781 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1205 20:01:12.973118  567781 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1205 20:01:12.973125  567781 command_runner.go:130] > # shared_cpuset = ""
	I1205 20:01:12.973138  567781 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 20:01:12.973149  567781 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 20:01:12.973157  567781 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 20:01:12.973170  567781 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 20:01:12.973181  567781 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1205 20:01:12.973196  567781 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1205 20:01:12.973206  567781 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1205 20:01:12.973217  567781 command_runner.go:130] > # enable_criu_support = false
	I1205 20:01:12.973225  567781 command_runner.go:130] > # Enable/disable the generation of the container,
	I1205 20:01:12.973241  567781 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1205 20:01:12.973252  567781 command_runner.go:130] > # enable_pod_events = false
	I1205 20:01:12.973259  567781 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 20:01:12.973287  567781 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1205 20:01:12.973297  567781 command_runner.go:130] > # default_runtime = "runc"
	I1205 20:01:12.973306  567781 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 20:01:12.973320  567781 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I1205 20:01:12.973335  567781 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 20:01:12.973346  567781 command_runner.go:130] > # creation as a file is not desired either.
	I1205 20:01:12.973359  567781 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 20:01:12.973374  567781 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 20:01:12.973384  567781 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 20:01:12.973388  567781 command_runner.go:130] > # ]
	I1205 20:01:12.973399  567781 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 20:01:12.973412  567781 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 20:01:12.973425  567781 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1205 20:01:12.973437  567781 command_runner.go:130] > # Each entry in the table should follow the format:
	I1205 20:01:12.973446  567781 command_runner.go:130] > #
	I1205 20:01:12.973454  567781 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1205 20:01:12.973464  567781 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1205 20:01:12.973490  567781 command_runner.go:130] > # runtime_type = "oci"
	I1205 20:01:12.973500  567781 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1205 20:01:12.973506  567781 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1205 20:01:12.973512  567781 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1205 20:01:12.973524  567781 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1205 20:01:12.973536  567781 command_runner.go:130] > # monitor_env = []
	I1205 20:01:12.973547  567781 command_runner.go:130] > # privileged_without_host_devices = false
	I1205 20:01:12.973554  567781 command_runner.go:130] > # allowed_annotations = []
	I1205 20:01:12.973566  567781 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1205 20:01:12.973575  567781 command_runner.go:130] > # Where:
	I1205 20:01:12.973583  567781 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1205 20:01:12.973598  567781 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1205 20:01:12.973612  567781 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 20:01:12.973622  567781 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 20:01:12.973632  567781 command_runner.go:130] > #   in $PATH.
	I1205 20:01:12.973642  567781 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1205 20:01:12.973652  567781 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 20:01:12.973662  567781 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1205 20:01:12.973669  567781 command_runner.go:130] > #   state.
	I1205 20:01:12.973677  567781 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 20:01:12.973691  567781 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1205 20:01:12.973705  567781 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 20:01:12.973713  567781 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 20:01:12.973726  567781 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 20:01:12.973739  567781 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 20:01:12.973750  567781 command_runner.go:130] > #   The currently recognized values are:
	I1205 20:01:12.973757  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 20:01:12.973770  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 20:01:12.973789  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 20:01:12.973801  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 20:01:12.973813  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 20:01:12.973825  567781 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 20:01:12.973838  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1205 20:01:12.973847  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1205 20:01:12.973858  567781 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 20:01:12.973872  567781 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1205 20:01:12.973882  567781 command_runner.go:130] > #   deprecated option "conmon".
	I1205 20:01:12.973893  567781 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1205 20:01:12.973906  567781 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1205 20:01:12.973917  567781 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1205 20:01:12.973926  567781 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 20:01:12.973933  567781 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1205 20:01:12.973944  567781 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1205 20:01:12.973959  567781 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1205 20:01:12.973972  567781 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1205 20:01:12.973980  567781 command_runner.go:130] > #
	I1205 20:01:12.973988  567781 command_runner.go:130] > # Using the seccomp notifier feature:
	I1205 20:01:12.973995  567781 command_runner.go:130] > #
	I1205 20:01:12.974005  567781 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1205 20:01:12.974017  567781 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1205 20:01:12.974021  567781 command_runner.go:130] > #
	I1205 20:01:12.974035  567781 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1205 20:01:12.974049  567781 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1205 20:01:12.974057  567781 command_runner.go:130] > #
	I1205 20:01:12.974066  567781 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1205 20:01:12.974075  567781 command_runner.go:130] > # feature.
	I1205 20:01:12.974081  567781 command_runner.go:130] > #
	I1205 20:01:12.974094  567781 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1205 20:01:12.974102  567781 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1205 20:01:12.974113  567781 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1205 20:01:12.974127  567781 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1205 20:01:12.974140  567781 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1205 20:01:12.974148  567781 command_runner.go:130] > #
	I1205 20:01:12.974157  567781 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1205 20:01:12.974173  567781 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1205 20:01:12.974181  567781 command_runner.go:130] > #
	I1205 20:01:12.974188  567781 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1205 20:01:12.974194  567781 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1205 20:01:12.974202  567781 command_runner.go:130] > #
	I1205 20:01:12.974213  567781 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1205 20:01:12.974226  567781 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1205 20:01:12.974237  567781 command_runner.go:130] > # limitation.
	I1205 20:01:12.974248  567781 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 20:01:12.974257  567781 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1205 20:01:12.974266  567781 command_runner.go:130] > runtime_type = "oci"
	I1205 20:01:12.974272  567781 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 20:01:12.974281  567781 command_runner.go:130] > runtime_config_path = ""
	I1205 20:01:12.974292  567781 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 20:01:12.974302  567781 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 20:01:12.974312  567781 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 20:01:12.974319  567781 command_runner.go:130] > monitor_env = [
	I1205 20:01:12.974327  567781 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 20:01:12.974336  567781 command_runner.go:130] > ]
	I1205 20:01:12.974344  567781 command_runner.go:130] > privileged_without_host_devices = false
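Tying the seccomp notifier description above to the pod side: the sandbox carries the "io.kubernetes.cri-o.seccompNotifierAction" annotation and uses restartPolicy Never, and the chosen runtime handler must list that annotation in allowed_annotations (the runc handler above does not). A hedged Go sketch with placeholder pod name and image, not taken from this log:

```go
// Sketch of the pod-side setup for the seccomp notifier feature described above:
// the sandbox annotation selects the "stop" action, and restartPolicy must be
// Never so the kubelet does not immediately restart the terminated container.
// Assumes a runtime handler whose allowed_annotations includes this annotation;
// the pod name and image below are placeholders.
package main

import (
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "seccomp-notifier-demo", // placeholder name
			Annotations: map[string]string{
				"io.kubernetes.cri-o.seccompNotifierAction": "stop",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{
				{Name: "app", Image: "registry.example.com/app:latest"}, // placeholder image
			},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```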
	I1205 20:01:12.974356  567781 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 20:01:12.974362  567781 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 20:01:12.974368  567781 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 20:01:12.974376  567781 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 20:01:12.974385  567781 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 20:01:12.974395  567781 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 20:01:12.974413  567781 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 20:01:12.974427  567781 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 20:01:12.974440  567781 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 20:01:12.974453  567781 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 20:01:12.974461  567781 command_runner.go:130] > # Example:
	I1205 20:01:12.974466  567781 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 20:01:12.974471  567781 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 20:01:12.974476  567781 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 20:01:12.974480  567781 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 20:01:12.974485  567781 command_runner.go:130] > # cpuset = 0
	I1205 20:01:12.974489  567781 command_runner.go:130] > # cpushares = "0-1"
	I1205 20:01:12.974492  567781 command_runner.go:130] > # Where:
	I1205 20:01:12.974500  567781 command_runner.go:130] > # The workload name is workload-type.
	I1205 20:01:12.974506  567781 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 20:01:12.974512  567781 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 20:01:12.974518  567781 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 20:01:12.974525  567781 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 20:01:12.974530  567781 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
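The workloads example in the comments above is plain TOML; a small sketch of decoding that exact fragment in Go with github.com/BurntSushi/toml. The struct layout is an assumption for illustration only, not CRI-O's own configuration types:

```go
// Sketch: decode the example workloads table from the comments above.
// Uses github.com/BurntSushi/toml; the Go types here are illustrative.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type workload struct {
	ActivationAnnotation string                 `toml:"activation_annotation"`
	AnnotationPrefix     string                 `toml:"annotation_prefix"`
	Resources            map[string]interface{} `toml:"resources"`
}

type config struct {
	Crio struct {
		Runtime struct {
			Workloads map[string]workload `toml:"workloads"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

// Fragment copied from the commented example above.
const fragment = `
[crio.runtime.workloads.workload-type]
activation_annotation = "io.crio/workload"
annotation_prefix = "io.crio.workload-type"
[crio.runtime.workloads.workload-type.resources]
cpuset = 0
cpushares = "0-1"
`

func main() {
	var cfg config
	if _, err := toml.Decode(fragment, &cfg); err != nil {
		log.Fatal(err)
	}
	for name, w := range cfg.Crio.Runtime.Workloads {
		fmt.Printf("%s: activate via %q, resources=%v\n", name, w.ActivationAnnotation, w.Resources)
	}
}
```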
	I1205 20:01:12.974537  567781 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1205 20:01:12.974547  567781 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1205 20:01:12.974555  567781 command_runner.go:130] > # Default value is set to true
	I1205 20:01:12.974562  567781 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1205 20:01:12.974571  567781 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1205 20:01:12.974579  567781 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1205 20:01:12.974586  567781 command_runner.go:130] > # Default value is set to 'false'
	I1205 20:01:12.974593  567781 command_runner.go:130] > # disable_hostport_mapping = false
	I1205 20:01:12.974602  567781 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 20:01:12.974607  567781 command_runner.go:130] > #
	I1205 20:01:12.974614  567781 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 20:01:12.974620  567781 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 20:01:12.974625  567781 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 20:01:12.974631  567781 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 20:01:12.974636  567781 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 20:01:12.974639  567781 command_runner.go:130] > [crio.image]
	I1205 20:01:12.974644  567781 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 20:01:12.974650  567781 command_runner.go:130] > # default_transport = "docker://"
	I1205 20:01:12.974656  567781 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 20:01:12.974662  567781 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:01:12.974665  567781 command_runner.go:130] > # global_auth_file = ""
	I1205 20:01:12.974671  567781 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 20:01:12.974679  567781 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:01:12.974683  567781 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1205 20:01:12.974691  567781 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 20:01:12.974696  567781 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:01:12.974702  567781 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:01:12.974706  567781 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 20:01:12.974711  567781 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 20:01:12.974721  567781 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1205 20:01:12.974730  567781 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1205 20:01:12.974738  567781 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 20:01:12.974742  567781 command_runner.go:130] > # pause_command = "/pause"
	I1205 20:01:12.974750  567781 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1205 20:01:12.974756  567781 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1205 20:01:12.974767  567781 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1205 20:01:12.974780  567781 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1205 20:01:12.974790  567781 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1205 20:01:12.974796  567781 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1205 20:01:12.974802  567781 command_runner.go:130] > # pinned_images = [
	I1205 20:01:12.974806  567781 command_runner.go:130] > # ]
	I1205 20:01:12.974812  567781 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 20:01:12.974820  567781 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 20:01:12.974826  567781 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 20:01:12.974834  567781 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 20:01:12.974839  567781 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 20:01:12.974843  567781 command_runner.go:130] > # signature_policy = ""
	I1205 20:01:12.974848  567781 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1205 20:01:12.974855  567781 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1205 20:01:12.974863  567781 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1205 20:01:12.974869  567781 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1205 20:01:12.974878  567781 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1205 20:01:12.974883  567781 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1205 20:01:12.974891  567781 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 20:01:12.974896  567781 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 20:01:12.974903  567781 command_runner.go:130] > # changing them here.
	I1205 20:01:12.974907  567781 command_runner.go:130] > # insecure_registries = [
	I1205 20:01:12.974910  567781 command_runner.go:130] > # ]
	I1205 20:01:12.974916  567781 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 20:01:12.974923  567781 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 20:01:12.974927  567781 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 20:01:12.974932  567781 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 20:01:12.974940  567781 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 20:01:12.974945  567781 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 20:01:12.974952  567781 command_runner.go:130] > # CNI plugins.
	I1205 20:01:12.974955  567781 command_runner.go:130] > [crio.network]
	I1205 20:01:12.974960  567781 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 20:01:12.974969  567781 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 20:01:12.974976  567781 command_runner.go:130] > # cni_default_network = ""
	I1205 20:01:12.974981  567781 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 20:01:12.974988  567781 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 20:01:12.974993  567781 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 20:01:12.975003  567781 command_runner.go:130] > # plugin_dirs = [
	I1205 20:01:12.975006  567781 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 20:01:12.975010  567781 command_runner.go:130] > # ]
	I1205 20:01:12.975015  567781 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 20:01:12.975022  567781 command_runner.go:130] > [crio.metrics]
	I1205 20:01:12.975027  567781 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 20:01:12.975034  567781 command_runner.go:130] > enable_metrics = true
	I1205 20:01:12.975039  567781 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 20:01:12.975046  567781 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 20:01:12.975052  567781 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1205 20:01:12.975060  567781 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 20:01:12.975065  567781 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 20:01:12.975071  567781 command_runner.go:130] > # metrics_collectors = [
	I1205 20:01:12.975075  567781 command_runner.go:130] > # 	"operations",
	I1205 20:01:12.975080  567781 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 20:01:12.975084  567781 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 20:01:12.975088  567781 command_runner.go:130] > # 	"operations_errors",
	I1205 20:01:12.975092  567781 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 20:01:12.975097  567781 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 20:01:12.975101  567781 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 20:01:12.975106  567781 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 20:01:12.975111  567781 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 20:01:12.975118  567781 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 20:01:12.975122  567781 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 20:01:12.975129  567781 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1205 20:01:12.975133  567781 command_runner.go:130] > # 	"containers_oom_total",
	I1205 20:01:12.975139  567781 command_runner.go:130] > # 	"containers_oom",
	I1205 20:01:12.975143  567781 command_runner.go:130] > # 	"processes_defunct",
	I1205 20:01:12.975147  567781 command_runner.go:130] > # 	"operations_total",
	I1205 20:01:12.975154  567781 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 20:01:12.975167  567781 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 20:01:12.975174  567781 command_runner.go:130] > # 	"operations_errors_total",
	I1205 20:01:12.975181  567781 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 20:01:12.975185  567781 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 20:01:12.975191  567781 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 20:01:12.975197  567781 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 20:01:12.975203  567781 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 20:01:12.975207  567781 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 20:01:12.975214  567781 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1205 20:01:12.975219  567781 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1205 20:01:12.975227  567781 command_runner.go:130] > # ]
	I1205 20:01:12.975232  567781 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 20:01:12.975238  567781 command_runner.go:130] > # metrics_port = 9090
	I1205 20:01:12.975243  567781 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 20:01:12.975249  567781 command_runner.go:130] > # metrics_socket = ""
	I1205 20:01:12.975254  567781 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 20:01:12.975259  567781 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 20:01:12.975266  567781 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 20:01:12.975270  567781 command_runner.go:130] > # certificate on any modification event.
	I1205 20:01:12.975282  567781 command_runner.go:130] > # metrics_cert = ""
	I1205 20:01:12.975290  567781 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 20:01:12.975295  567781 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 20:01:12.975302  567781 command_runner.go:130] > # metrics_key = ""
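With enable_metrics = true above and the default metrics_port of 9090, the Prometheus endpoint can be probed directly. A minimal Go sketch for an on-node check; the loopback address is an assumption, not something this log shows:

```go
// Minimal sketch: fetch CRI-O's Prometheus metrics, assuming enable_metrics
// is true and the default metrics_port (9090) from the section above.
// The loopback address is an assumption for an on-node check.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		log.Fatalf("fetch metrics: %v", err)
	}
	defer resp.Body.Close()

	// Print only the crio_* / container_runtime_* samples named in the
	// metrics_collectors list above.
	sc := bufio.NewScanner(resp.Body)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // metric lines can be long
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatalf("read metrics: %v", err)
	}
}
```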
	I1205 20:01:12.975307  567781 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 20:01:12.975313  567781 command_runner.go:130] > [crio.tracing]
	I1205 20:01:12.975318  567781 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 20:01:12.975323  567781 command_runner.go:130] > # enable_tracing = false
	I1205 20:01:12.975332  567781 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1205 20:01:12.975336  567781 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 20:01:12.975343  567781 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1205 20:01:12.975350  567781 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 20:01:12.975354  567781 command_runner.go:130] > # CRI-O NRI configuration.
	I1205 20:01:12.975358  567781 command_runner.go:130] > [crio.nri]
	I1205 20:01:12.975363  567781 command_runner.go:130] > # Globally enable or disable NRI.
	I1205 20:01:12.975369  567781 command_runner.go:130] > # enable_nri = false
	I1205 20:01:12.975374  567781 command_runner.go:130] > # NRI socket to listen on.
	I1205 20:01:12.975380  567781 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1205 20:01:12.975384  567781 command_runner.go:130] > # NRI plugin directory to use.
	I1205 20:01:12.975389  567781 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1205 20:01:12.975394  567781 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1205 20:01:12.975401  567781 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1205 20:01:12.975406  567781 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1205 20:01:12.975412  567781 command_runner.go:130] > # nri_disable_connections = false
	I1205 20:01:12.975417  567781 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1205 20:01:12.975424  567781 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1205 20:01:12.975428  567781 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1205 20:01:12.975433  567781 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1205 20:01:12.975438  567781 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 20:01:12.975444  567781 command_runner.go:130] > [crio.stats]
	I1205 20:01:12.975450  567781 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 20:01:12.975455  567781 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 20:01:12.975460  567781 command_runner.go:130] > # stats_collection_period = 0
	I1205 20:01:12.975641  567781 command_runner.go:130] ! time="2024-12-05 20:01:12.938494287Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1205 20:01:12.975671  567781 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 20:01:12.975800  567781 cni.go:84] Creating CNI manager for ""
	I1205 20:01:12.975811  567781 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 20:01:12.975821  567781 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:01:12.975846  567781 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-346389 NodeName:multinode-346389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:01:12.976001  567781 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-346389"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.170"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:01:12.976078  567781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:01:12.986319  567781 command_runner.go:130] > kubeadm
	I1205 20:01:12.986344  567781 command_runner.go:130] > kubectl
	I1205 20:01:12.986351  567781 command_runner.go:130] > kubelet
	I1205 20:01:12.986493  567781 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:01:12.986558  567781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:01:12.996649  567781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 20:01:13.016182  567781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:01:13.035572  567781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
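The kubeadm.yaml written above pins the pod subnet to 10.244.0.0/16, the service subnet to 10.96.0.0/12, and the advertise address to 192.168.39.170. As a rough illustration only (not minikube's own validation), a short Go check that the node IP does not fall inside either CIDR could look like this:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		nodeIP := net.ParseIP("192.168.39.170") // advertise address from the config above
		for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} {
			_, subnet, err := net.ParseCIDR(cidr)
			if err != nil {
				panic(err)
			}
			if subnet.Contains(nodeIP) {
				fmt.Printf("node IP %s overlaps %s\n", nodeIP, cidr)
				return
			}
		}
		fmt.Println("node IP does not overlap pod or service CIDRs")
	}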
	I1205 20:01:13.055929  567781 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I1205 20:01:13.060107  567781 command_runner.go:130] > 192.168.39.170	control-plane.minikube.internal
	I1205 20:01:13.060190  567781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:01:13.204330  567781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:01:13.219375  567781 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389 for IP: 192.168.39.170
	I1205 20:01:13.219404  567781 certs.go:194] generating shared ca certs ...
	I1205 20:01:13.219425  567781 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:01:13.219672  567781 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:01:13.219721  567781 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:01:13.219736  567781 certs.go:256] generating profile certs ...
	I1205 20:01:13.219845  567781 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/client.key
	I1205 20:01:13.219936  567781 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/apiserver.key.a99a356c
	I1205 20:01:13.219995  567781 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/proxy-client.key
	I1205 20:01:13.220011  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:01:13.220030  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:01:13.220059  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:01:13.220076  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:01:13.220093  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:01:13.220112  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:01:13.220131  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:01:13.220153  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:01:13.220233  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:01:13.220308  567781 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:01:13.220323  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:01:13.220360  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:01:13.220395  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:01:13.220427  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:01:13.220481  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:01:13.220524  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 20:01:13.220544  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:01:13.220562  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 20:01:13.221193  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:01:13.246407  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:01:13.270522  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:01:13.296740  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:01:13.321632  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:01:13.346268  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:01:13.370792  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:01:13.396591  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:01:13.544785  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:01:13.646833  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:01:13.807609  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:01:13.935968  567781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:01:14.003409  567781 ssh_runner.go:195] Run: openssl version
	I1205 20:01:14.023379  567781 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1205 20:01:14.023465  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:01:14.051019  567781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:01:14.057942  567781 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:01:14.058417  567781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:01:14.058486  567781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:01:14.069192  567781 command_runner.go:130] > 3ec20f2e
	I1205 20:01:14.069278  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:01:14.082196  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:01:14.098540  567781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:01:14.107979  567781 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:01:14.108170  567781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:01:14.108224  567781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:01:14.113820  567781 command_runner.go:130] > b5213941
	I1205 20:01:14.114142  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:01:14.123859  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:01:14.134928  567781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:01:14.139577  567781 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:01:14.139616  567781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:01:14.139665  567781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:01:14.145926  567781 command_runner.go:130] > 51391683
	I1205 20:01:14.146059  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
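The Run lines above compute each certificate's OpenSSL subject hash and symlink it into /etc/ssl/certs/<hash>.0 so the system trust store can find it. A minimal Go sketch of the same step for a single certificate, shelling out to the same openssl invocation shown in the log (paths copied from the log; this is an illustration, not minikube's certs.go):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCert(certPath, certsDir string) error {
		// Same command as in the log: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		// Equivalent of: test -L <link> || ln -fs <cert> <link>
		if _, err := os.Lstat(link); err == nil {
			return nil
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}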
	I1205 20:01:14.155560  567781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:01:14.160200  567781 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:01:14.160229  567781 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 20:01:14.160238  567781 command_runner.go:130] > Device: 253,1	Inode: 3150382     Links: 1
	I1205 20:01:14.160248  567781 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:01:14.160262  567781 command_runner.go:130] > Access: 2024-12-05 19:54:21.782970096 +0000
	I1205 20:01:14.160287  567781 command_runner.go:130] > Modify: 2024-12-05 19:54:21.782970096 +0000
	I1205 20:01:14.160303  567781 command_runner.go:130] > Change: 2024-12-05 19:54:21.782970096 +0000
	I1205 20:01:14.160310  567781 command_runner.go:130] >  Birth: 2024-12-05 19:54:21.782970096 +0000
	I1205 20:01:14.160355  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:01:14.165997  567781 command_runner.go:130] > Certificate will not expire
	I1205 20:01:14.166326  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:01:14.171890  567781 command_runner.go:130] > Certificate will not expire
	I1205 20:01:14.171968  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:01:14.177472  567781 command_runner.go:130] > Certificate will not expire
	I1205 20:01:14.177558  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:01:14.183215  567781 command_runner.go:130] > Certificate will not expire
	I1205 20:01:14.183288  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:01:14.188896  567781 command_runner.go:130] > Certificate will not expire
	I1205 20:01:14.189185  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:01:14.195039  567781 command_runner.go:130] > Certificate will not expire
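Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be done natively; a small sketch assuming one of the certificate paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring the semantics of `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}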
	I1205 20:01:14.195129  567781 kubeadm.go:392] StartCluster: {Name:multinode-346389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-346389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:01:14.195286  567781 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:01:14.195349  567781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:01:14.232179  567781 command_runner.go:130] > c5fd89a18bbaa2140d0b8094df5b9d8dd429cd9b3bdaed26f464b78c576e7189
	I1205 20:01:14.232206  567781 command_runner.go:130] > 2e630b6753b94f0df8dd67b2fcc1bfbb5f7761f3bc9ab72dbd462e14a8378dc4
	I1205 20:01:14.232212  567781 command_runner.go:130] > ce5efd3787e17ba8738a5427baf7eea1ee5b7a8f938bb47c7abed371e5a603f7
	I1205 20:01:14.232219  567781 command_runner.go:130] > a666ed3405a3e13cf20fbd8dbac45816e904954b4ebc68fb8a1b80fd282284c8
	I1205 20:01:14.232224  567781 command_runner.go:130] > c897d3cc7ee00c86db7cd1e6bbc9eb3ea765742ebfa242a0ce8cce78952a7dde
	I1205 20:01:14.232229  567781 command_runner.go:130] > 86a636d2da85280e9a07cfc40c5efed5746d692941b52ada3d03aaf858d8a23c
	I1205 20:01:14.232235  567781 command_runner.go:130] > 8653657853de98aba7582b8f54f8e70b9afd24b32764929281d4e662609b8d11
	I1205 20:01:14.232248  567781 command_runner.go:130] > 6163a5b6d362dde00b1ce847200a6ca36c7b3c15cf8f30ebe3efe3a224b3fe1a
	I1205 20:01:14.232289  567781 cri.go:89] found id: "c5fd89a18bbaa2140d0b8094df5b9d8dd429cd9b3bdaed26f464b78c576e7189"
	I1205 20:01:14.232302  567781 cri.go:89] found id: "2e630b6753b94f0df8dd67b2fcc1bfbb5f7761f3bc9ab72dbd462e14a8378dc4"
	I1205 20:01:14.232309  567781 cri.go:89] found id: "ce5efd3787e17ba8738a5427baf7eea1ee5b7a8f938bb47c7abed371e5a603f7"
	I1205 20:01:14.232324  567781 cri.go:89] found id: "a666ed3405a3e13cf20fbd8dbac45816e904954b4ebc68fb8a1b80fd282284c8"
	I1205 20:01:14.232333  567781 cri.go:89] found id: "c897d3cc7ee00c86db7cd1e6bbc9eb3ea765742ebfa242a0ce8cce78952a7dde"
	I1205 20:01:14.232338  567781 cri.go:89] found id: "86a636d2da85280e9a07cfc40c5efed5746d692941b52ada3d03aaf858d8a23c"
	I1205 20:01:14.232346  567781 cri.go:89] found id: "8653657853de98aba7582b8f54f8e70b9afd24b32764929281d4e662609b8d11"
	I1205 20:01:14.232351  567781 cri.go:89] found id: "6163a5b6d362dde00b1ce847200a6ca36c7b3c15cf8f30ebe3efe3a224b3fe1a"
	I1205 20:01:14.232358  567781 cri.go:89] found id: ""
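The block above is minikube enumerating existing kube-system containers before restarting the cluster. A stand-alone sketch of the same crictl query (flags copied from the Run line above; not minikube's cri.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// --quiet prints one container ID per line, matching the "found id" entries above.
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system containers\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}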
	I1205 20:01:14.232405  567781 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-346389 -n multinode-346389
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-346389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.10s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 stop
E1205 20:03:15.014425  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-346389 stop: exit status 82 (2m0.486568143s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-346389-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-346389 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-346389 status: (18.854933667s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-346389 status --alsologtostderr: (3.392304921s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-346389 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-346389 status --alsologtostderr": 
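The two assertions above count how many hosts and kubelets report as stopped after `minikube stop` timed out with exit status 82. A rough reproduction of that check, assuming the usual `minikube status` text output ("host: Stopped", "kubelet: Stopped"); this is an illustration, not the helper the test itself uses:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-346389",
			"status", "--alsologtostderr").CombinedOutput()
		if err != nil {
			// `minikube status` exits non-zero when components are not running,
			// so keep going and inspect the text either way.
			fmt.Fprintln(os.Stderr, "status exited with:", err)
		}
		text := string(out)
		fmt.Println("stopped hosts:", strings.Count(text, "host: Stopped"))
		fmt.Println("stopped kubelets:", strings.Count(text, "kubelet: Stopped"))
		// The failing test expects both counts to match the number of remaining
		// nodes (two here, after m03 was deleted) once `stop` succeeds.
	}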
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-346389 -n multinode-346389
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-346389 logs -n 25: (2.274091196s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m02:/home/docker/cp-test.txt                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389:/home/docker/cp-test_multinode-346389-m02_multinode-346389.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n multinode-346389 sudo cat                                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /home/docker/cp-test_multinode-346389-m02_multinode-346389.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m02:/home/docker/cp-test.txt                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03:/home/docker/cp-test_multinode-346389-m02_multinode-346389-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n multinode-346389-m03 sudo cat                                   | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /home/docker/cp-test_multinode-346389-m02_multinode-346389-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp testdata/cp-test.txt                                                | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m03:/home/docker/cp-test.txt                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile122835969/001/cp-test_multinode-346389-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m03:/home/docker/cp-test.txt                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389:/home/docker/cp-test_multinode-346389-m03_multinode-346389.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n multinode-346389 sudo cat                                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /home/docker/cp-test_multinode-346389-m03_multinode-346389.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m03:/home/docker/cp-test.txt                       | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m02:/home/docker/cp-test_multinode-346389-m03_multinode-346389-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n multinode-346389-m02 sudo cat                                   | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /home/docker/cp-test_multinode-346389-m03_multinode-346389-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-346389 node stop m03                                                          | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	| node    | multinode-346389 node start                                                             | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-346389                                                                | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:57 UTC |                     |
	| stop    | -p multinode-346389                                                                     | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:57 UTC |                     |
	| start   | -p multinode-346389                                                                     | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 19:59 UTC | 05 Dec 24 20:03 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-346389                                                                | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 20:03 UTC |                     |
	| node    | multinode-346389 node delete                                                            | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 20:03 UTC | 05 Dec 24 20:03 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-346389 stop                                                                   | multinode-346389 | jenkins | v1.34.0 | 05 Dec 24 20:03 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:59:39
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:59:39.399073  567781 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:59:39.399210  567781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:59:39.399220  567781 out.go:358] Setting ErrFile to fd 2...
	I1205 19:59:39.399224  567781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:59:39.399433  567781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:59:39.399971  567781 out.go:352] Setting JSON to false
	I1205 19:59:39.401052  567781 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9725,"bootTime":1733419054,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:59:39.401124  567781 start.go:139] virtualization: kvm guest
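The hostinfo line above is a JSON-encoded host summary. For readers who want to pull fields out of such a line, a small sketch that unmarshals the same keys (the raw string is copied verbatim from the log; the struct is illustrative, not minikube's type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type hostInfo struct {
		Hostname           string `json:"hostname"`
		Uptime             uint64 `json:"uptime"`
		BootTime           uint64 `json:"bootTime"`
		Procs              uint64 `json:"procs"`
		OS                 string `json:"os"`
		Platform           string `json:"platform"`
		PlatformVersion    string `json:"platformVersion"`
		KernelVersion      string `json:"kernelVersion"`
		KernelArch         string `json:"kernelArch"`
		VirtualizationRole string `json:"virtualizationRole"`
	}

	func main() {
		raw := `{"hostname":"ubuntu-20-agent","uptime":9725,"bootTime":1733419054,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}`
		var hi hostInfo
		if err := json.Unmarshal([]byte(raw), &hi); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s %s on %s/%s, up %d seconds\n",
			hi.Hostname, hi.Platform, hi.PlatformVersion, hi.OS, hi.KernelArch, hi.Uptime)
	}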
	I1205 19:59:39.403691  567781 out.go:177] * [multinode-346389] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:59:39.405116  567781 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:59:39.405168  567781 notify.go:220] Checking for updates...
	I1205 19:59:39.407682  567781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:59:39.409030  567781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:59:39.410280  567781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:59:39.411606  567781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:59:39.413317  567781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:59:39.415155  567781 config.go:182] Loaded profile config "multinode-346389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:59:39.415318  567781 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:59:39.415963  567781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:59:39.416037  567781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:59:39.432351  567781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I1205 19:59:39.433058  567781 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:59:39.433740  567781 main.go:141] libmachine: Using API Version  1
	I1205 19:59:39.433764  567781 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:59:39.434314  567781 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:59:39.434547  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 19:59:39.471624  567781 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 19:59:39.473034  567781 start.go:297] selected driver: kvm2
	I1205 19:59:39.473050  567781 start.go:901] validating driver "kvm2" against &{Name:multinode-346389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-346389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:59:39.473245  567781 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:59:39.473708  567781 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:59:39.473824  567781 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:59:39.490838  567781 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:59:39.491573  567781 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:59:39.491609  567781 cni.go:84] Creating CNI manager for ""
	I1205 19:59:39.491670  567781 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 19:59:39.491742  567781 start.go:340] cluster config:
	{Name:multinode-346389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-346389 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provision
er:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:59:39.491883  567781 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:59:39.494853  567781 out.go:177] * Starting "multinode-346389" primary control-plane node in "multinode-346389" cluster
	I1205 19:59:39.496260  567781 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:59:39.496374  567781 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:59:39.496387  567781 cache.go:56] Caching tarball of preloaded images
	I1205 19:59:39.496473  567781 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:59:39.496484  567781 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
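cache.go above reports that the preload tarball is already on disk, so the download is skipped. A trivial sketch of that existence check (the path is copied from the log; the fallback message is only illustrative):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		preload := "/home/jenkins/minikube-integration/20052-530897/.minikube/cache/" +
			"preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"
		if _, err := os.Stat(preload); err == nil {
			fmt.Println("found local preload, skipping download")
		} else {
			fmt.Println("preload missing, would download:", err)
		}
	}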
	I1205 19:59:39.496620  567781 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/config.json ...
	I1205 19:59:39.496822  567781 start.go:360] acquireMachinesLock for multinode-346389: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:59:39.496873  567781 start.go:364] duration metric: took 26.528µs to acquireMachinesLock for "multinode-346389"
	I1205 19:59:39.496886  567781 start.go:96] Skipping create...Using existing machine configuration
	I1205 19:59:39.496892  567781 fix.go:54] fixHost starting: 
	I1205 19:59:39.497152  567781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:59:39.497184  567781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:59:39.512871  567781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40025
	I1205 19:59:39.513426  567781 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:59:39.513852  567781 main.go:141] libmachine: Using API Version  1
	I1205 19:59:39.513871  567781 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:59:39.514232  567781 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:59:39.514461  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 19:59:39.514617  567781 main.go:141] libmachine: (multinode-346389) Calling .GetState
	I1205 19:59:39.516291  567781 fix.go:112] recreateIfNeeded on multinode-346389: state=Running err=<nil>
	W1205 19:59:39.516313  567781 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 19:59:39.518292  567781 out.go:177] * Updating the running kvm2 "multinode-346389" VM ...
	I1205 19:59:39.519859  567781 machine.go:93] provisionDockerMachine start ...
	I1205 19:59:39.519887  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 19:59:39.520139  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:39.522910  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.523360  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:39.523391  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.523535  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 19:59:39.523722  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.523945  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.524199  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 19:59:39.524452  567781 main.go:141] libmachine: Using SSH client type: native
	I1205 19:59:39.524676  567781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1205 19:59:39.524693  567781 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 19:59:39.634257  567781 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-346389
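provisionDockerMachine above opens a native SSH session to 192.168.39.170 and runs `hostname`. A compact sketch of the same round-trip using golang.org/x/crypto/ssh; the key path and the "docker" user are assumptions based on the GetSSHKeyPath/GetSSHUsername calls in the log, and this is not libmachine's own client:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/.minikube/machines/multinode-346389/id_rsa") // assumed key path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker", // assumed SSH user
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.170:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.Output("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("hostname reported by the VM: %s", out)
	}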
	
	I1205 19:59:39.634292  567781 main.go:141] libmachine: (multinode-346389) Calling .GetMachineName
	I1205 19:59:39.634594  567781 buildroot.go:166] provisioning hostname "multinode-346389"
	I1205 19:59:39.634626  567781 main.go:141] libmachine: (multinode-346389) Calling .GetMachineName
	I1205 19:59:39.634822  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:39.637801  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.638266  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:39.638301  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.638416  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 19:59:39.638648  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.638884  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.639050  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 19:59:39.639236  567781 main.go:141] libmachine: Using SSH client type: native
	I1205 19:59:39.639424  567781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1205 19:59:39.639444  567781 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-346389 && echo "multinode-346389" | sudo tee /etc/hostname
	I1205 19:59:39.760732  567781 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-346389
	
	I1205 19:59:39.760766  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:39.763651  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.764121  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:39.764157  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.764350  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 19:59:39.764549  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.764786  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:39.764960  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 19:59:39.765148  567781 main.go:141] libmachine: Using SSH client type: native
	I1205 19:59:39.765382  567781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1205 19:59:39.765402  567781 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-346389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-346389/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-346389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:59:39.873337  567781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
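The shell snippet run above makes sure /etc/hosts resolves the machine's own hostname, rewriting the 127.0.1.1 line if one already exists. The same logic in Go, as an illustrative equivalent rather than minikube's provisioner:

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell logic above: if no line ends in the
	// hostname, either rewrite an existing 127.0.1.1 entry or append one.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		text := string(data)
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(text) {
			return nil // already present, like the outer grep -xq check
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(text) {
			text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
		} else {
			if !strings.HasSuffix(text, "\n") {
				text += "\n"
			}
			text += "127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(text), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "multinode-346389"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}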
	I1205 19:59:39.873371  567781 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 19:59:39.873405  567781 buildroot.go:174] setting up certificates
	I1205 19:59:39.873415  567781 provision.go:84] configureAuth start
	I1205 19:59:39.873429  567781 main.go:141] libmachine: (multinode-346389) Calling .GetMachineName
	I1205 19:59:39.873686  567781 main.go:141] libmachine: (multinode-346389) Calling .GetIP
	I1205 19:59:39.876305  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.876678  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:39.876697  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.876893  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:39.879105  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.879475  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:39.879512  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:39.879601  567781 provision.go:143] copyHostCerts
	I1205 19:59:39.879632  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:59:39.879664  567781 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 19:59:39.879683  567781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 19:59:39.879748  567781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 19:59:39.879823  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:59:39.879867  567781 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 19:59:39.879874  567781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 19:59:39.879899  567781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 19:59:39.879946  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:59:39.879962  567781 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 19:59:39.879968  567781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 19:59:39.879995  567781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 19:59:39.880050  567781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.multinode-346389 san=[127.0.0.1 192.168.39.170 localhost minikube multinode-346389]
	I1205 19:59:40.032449  567781 provision.go:177] copyRemoteCerts
	I1205 19:59:40.032514  567781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:59:40.032541  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:40.035424  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:40.035897  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:40.035939  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:40.036239  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 19:59:40.036455  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:40.036652  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 19:59:40.036797  567781 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/multinode-346389/id_rsa Username:docker}
	I1205 19:59:40.118594  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:59:40.118688  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:59:40.145381  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:59:40.145448  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 19:59:40.170930  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:59:40.171012  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:59:40.199760  567781 provision.go:87] duration metric: took 326.326113ms to configureAuth
	I1205 19:59:40.199795  567781 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:59:40.200034  567781 config.go:182] Loaded profile config "multinode-346389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:59:40.200115  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:59:40.202782  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:40.203187  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:59:40.203223  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:59:40.203437  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 19:59:40.203658  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:40.203825  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:59:40.203942  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 19:59:40.204140  567781 main.go:141] libmachine: Using SSH client type: native
	I1205 19:59:40.204336  567781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1205 19:59:40.204358  567781 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:01:11.016282  567781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:01:11.016331  567781 machine.go:96] duration metric: took 1m31.496451671s to provisionDockerMachine
	I1205 20:01:11.016349  567781 start.go:293] postStartSetup for "multinode-346389" (driver="kvm2")
	I1205 20:01:11.016376  567781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:01:11.016398  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 20:01:11.016712  567781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:01:11.016748  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 20:01:11.020041  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.020451  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:11.020475  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.020611  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 20:01:11.020831  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 20:01:11.021002  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 20:01:11.021161  567781 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/multinode-346389/id_rsa Username:docker}
	I1205 20:01:11.108501  567781 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:01:11.113266  567781 command_runner.go:130] > NAME=Buildroot
	I1205 20:01:11.113295  567781 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1205 20:01:11.113302  567781 command_runner.go:130] > ID=buildroot
	I1205 20:01:11.113309  567781 command_runner.go:130] > VERSION_ID=2023.02.9
	I1205 20:01:11.113317  567781 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1205 20:01:11.113364  567781 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:01:11.113382  567781 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:01:11.113448  567781 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:01:11.113521  567781 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:01:11.113532  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem
	I1205 20:01:11.113620  567781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:01:11.123521  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:01:11.149475  567781 start.go:296] duration metric: took 133.109918ms for postStartSetup
	I1205 20:01:11.149526  567781 fix.go:56] duration metric: took 1m31.6526341s for fixHost
	I1205 20:01:11.149552  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 20:01:11.152469  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.152891  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:11.152915  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.153149  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 20:01:11.153385  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 20:01:11.153549  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 20:01:11.153691  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 20:01:11.153901  567781 main.go:141] libmachine: Using SSH client type: native
	I1205 20:01:11.154150  567781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1205 20:01:11.154169  567781 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:01:11.257364  567781 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733428871.234345561
	
	I1205 20:01:11.257397  567781 fix.go:216] guest clock: 1733428871.234345561
	I1205 20:01:11.257407  567781 fix.go:229] Guest: 2024-12-05 20:01:11.234345561 +0000 UTC Remote: 2024-12-05 20:01:11.149534402 +0000 UTC m=+91.792579499 (delta=84.811159ms)
	I1205 20:01:11.257462  567781 fix.go:200] guest clock delta is within tolerance: 84.811159ms
	I1205 20:01:11.257472  567781 start.go:83] releasing machines lock for "multinode-346389", held for 1m31.760590935s
	I1205 20:01:11.257523  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 20:01:11.257862  567781 main.go:141] libmachine: (multinode-346389) Calling .GetIP
	I1205 20:01:11.260930  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.261381  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:11.261403  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.261610  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 20:01:11.262223  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 20:01:11.262421  567781 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 20:01:11.262525  567781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:01:11.262574  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 20:01:11.262706  567781 ssh_runner.go:195] Run: cat /version.json
	I1205 20:01:11.262732  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 20:01:11.265487  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.265512  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.265940  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:11.266004  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:11.266029  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.266053  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:11.266130  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 20:01:11.266252  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 20:01:11.266331  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 20:01:11.266423  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 20:01:11.266472  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 20:01:11.266545  567781 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 20:01:11.266612  567781 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/multinode-346389/id_rsa Username:docker}
	I1205 20:01:11.266665  567781 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/multinode-346389/id_rsa Username:docker}
	I1205 20:01:11.362407  567781 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 20:01:11.363427  567781 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1205 20:01:11.363572  567781 ssh_runner.go:195] Run: systemctl --version
	I1205 20:01:11.376813  567781 command_runner.go:130] > systemd 252 (252)
	I1205 20:01:11.376888  567781 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1205 20:01:11.377460  567781 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:01:11.543557  567781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:01:11.552396  567781 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 20:01:11.552803  567781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:01:11.552886  567781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:01:11.563824  567781 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 20:01:11.563854  567781 start.go:495] detecting cgroup driver to use...
	I1205 20:01:11.563935  567781 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:01:11.583402  567781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:01:11.598068  567781 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:01:11.598145  567781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:01:11.613054  567781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:01:11.627534  567781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:01:11.769698  567781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:01:11.919319  567781 docker.go:233] disabling docker service ...
	I1205 20:01:11.919387  567781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:01:11.937422  567781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:01:11.952413  567781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:01:12.097049  567781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:01:12.233683  567781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:01:12.247963  567781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:01:12.267382  567781 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 20:01:12.267814  567781 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:01:12.267892  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.279940  567781 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:01:12.280029  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.290636  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.301203  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.311967  567781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:01:12.323374  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.335234  567781 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:01:12.347290  567781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
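	Taken together, the sed edits above are expected to leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (a sketch of the affected keys only; the real drop-in file carries additional options):
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]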
	I1205 20:01:12.357956  567781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:01:12.367536  567781 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 20:01:12.367700  567781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:01:12.377244  567781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:01:12.513634  567781 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:01:12.725207  567781 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:01:12.725285  567781 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:01:12.730356  567781 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 20:01:12.730373  567781 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 20:01:12.730414  567781 command_runner.go:130] > Device: 0,22	Inode: 1290        Links: 1
	I1205 20:01:12.730427  567781 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:01:12.730433  567781 command_runner.go:130] > Access: 2024-12-05 20:01:12.589770690 +0000
	I1205 20:01:12.730446  567781 command_runner.go:130] > Modify: 2024-12-05 20:01:12.589770690 +0000
	I1205 20:01:12.730453  567781 command_runner.go:130] > Change: 2024-12-05 20:01:12.589770690 +0000
	I1205 20:01:12.730463  567781 command_runner.go:130] >  Birth: -
	I1205 20:01:12.730502  567781 start.go:563] Will wait 60s for crictl version
	I1205 20:01:12.730556  567781 ssh_runner.go:195] Run: which crictl
	I1205 20:01:12.734557  567781 command_runner.go:130] > /usr/bin/crictl
	I1205 20:01:12.734654  567781 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:01:12.771346  567781 command_runner.go:130] > Version:  0.1.0
	I1205 20:01:12.771373  567781 command_runner.go:130] > RuntimeName:  cri-o
	I1205 20:01:12.771380  567781 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1205 20:01:12.771389  567781 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 20:01:12.772539  567781 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:01:12.772633  567781 ssh_runner.go:195] Run: crio --version
	I1205 20:01:12.801596  567781 command_runner.go:130] > crio version 1.29.1
	I1205 20:01:12.801625  567781 command_runner.go:130] > Version:        1.29.1
	I1205 20:01:12.801634  567781 command_runner.go:130] > GitCommit:      unknown
	I1205 20:01:12.801642  567781 command_runner.go:130] > GitCommitDate:  unknown
	I1205 20:01:12.801647  567781 command_runner.go:130] > GitTreeState:   clean
	I1205 20:01:12.801659  567781 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 20:01:12.801672  567781 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 20:01:12.801679  567781 command_runner.go:130] > Compiler:       gc
	I1205 20:01:12.801687  567781 command_runner.go:130] > Platform:       linux/amd64
	I1205 20:01:12.801695  567781 command_runner.go:130] > Linkmode:       dynamic
	I1205 20:01:12.801705  567781 command_runner.go:130] > BuildTags:      
	I1205 20:01:12.801713  567781 command_runner.go:130] >   containers_image_ostree_stub
	I1205 20:01:12.801723  567781 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 20:01:12.801729  567781 command_runner.go:130] >   btrfs_noversion
	I1205 20:01:12.801739  567781 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 20:01:12.801746  567781 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 20:01:12.801754  567781 command_runner.go:130] >   seccomp
	I1205 20:01:12.801760  567781 command_runner.go:130] > LDFlags:          unknown
	I1205 20:01:12.801812  567781 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:01:12.801836  567781 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:01:12.803030  567781 ssh_runner.go:195] Run: crio --version
	I1205 20:01:12.830598  567781 command_runner.go:130] > crio version 1.29.1
	I1205 20:01:12.830631  567781 command_runner.go:130] > Version:        1.29.1
	I1205 20:01:12.830640  567781 command_runner.go:130] > GitCommit:      unknown
	I1205 20:01:12.830648  567781 command_runner.go:130] > GitCommitDate:  unknown
	I1205 20:01:12.830655  567781 command_runner.go:130] > GitTreeState:   clean
	I1205 20:01:12.830664  567781 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 20:01:12.830671  567781 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 20:01:12.830678  567781 command_runner.go:130] > Compiler:       gc
	I1205 20:01:12.830689  567781 command_runner.go:130] > Platform:       linux/amd64
	I1205 20:01:12.830696  567781 command_runner.go:130] > Linkmode:       dynamic
	I1205 20:01:12.830707  567781 command_runner.go:130] > BuildTags:      
	I1205 20:01:12.830715  567781 command_runner.go:130] >   containers_image_ostree_stub
	I1205 20:01:12.830725  567781 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 20:01:12.830732  567781 command_runner.go:130] >   btrfs_noversion
	I1205 20:01:12.830743  567781 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 20:01:12.830758  567781 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 20:01:12.830767  567781 command_runner.go:130] >   seccomp
	I1205 20:01:12.830774  567781 command_runner.go:130] > LDFlags:          unknown
	I1205 20:01:12.830783  567781 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:01:12.830791  567781 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:01:12.834139  567781 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:01:12.835756  567781 main.go:141] libmachine: (multinode-346389) Calling .GetIP
	I1205 20:01:12.838685  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:12.839081  567781 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 20:01:12.839112  567781 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 20:01:12.839308  567781 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:01:12.844041  567781 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1205 20:01:12.844146  567781 kubeadm.go:883] updating cluster {Name:multinode-346389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-346389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:01:12.844323  567781 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:01:12.844374  567781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:01:12.890952  567781 command_runner.go:130] > {
	I1205 20:01:12.890983  567781 command_runner.go:130] >   "images": [
	I1205 20:01:12.890989  567781 command_runner.go:130] >     {
	I1205 20:01:12.891000  567781 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 20:01:12.891007  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891014  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 20:01:12.891020  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891025  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891037  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 20:01:12.891065  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 20:01:12.891074  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891080  567781 command_runner.go:130] >       "size": "94965812",
	I1205 20:01:12.891089  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.891102  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891113  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891120  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891125  567781 command_runner.go:130] >     },
	I1205 20:01:12.891130  567781 command_runner.go:130] >     {
	I1205 20:01:12.891139  567781 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 20:01:12.891147  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891156  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 20:01:12.891164  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891171  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891184  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 20:01:12.891196  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 20:01:12.891205  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891214  567781 command_runner.go:130] >       "size": "94958644",
	I1205 20:01:12.891222  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.891235  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891244  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891253  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891261  567781 command_runner.go:130] >     },
	I1205 20:01:12.891266  567781 command_runner.go:130] >     {
	I1205 20:01:12.891278  567781 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 20:01:12.891288  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891299  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 20:01:12.891308  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891316  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891327  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 20:01:12.891338  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 20:01:12.891347  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891359  567781 command_runner.go:130] >       "size": "1363676",
	I1205 20:01:12.891368  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.891377  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891386  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891396  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891419  567781 command_runner.go:130] >     },
	I1205 20:01:12.891428  567781 command_runner.go:130] >     {
	I1205 20:01:12.891437  567781 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 20:01:12.891445  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891456  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 20:01:12.891465  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891475  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891491  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 20:01:12.891513  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 20:01:12.891520  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891527  567781 command_runner.go:130] >       "size": "31470524",
	I1205 20:01:12.891536  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.891543  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891552  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891562  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891570  567781 command_runner.go:130] >     },
	I1205 20:01:12.891576  567781 command_runner.go:130] >     {
	I1205 20:01:12.891589  567781 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 20:01:12.891598  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891608  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 20:01:12.891615  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891621  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891634  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 20:01:12.891648  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 20:01:12.891656  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891662  567781 command_runner.go:130] >       "size": "63273227",
	I1205 20:01:12.891670  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.891675  567781 command_runner.go:130] >       "username": "nonroot",
	I1205 20:01:12.891684  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891693  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891698  567781 command_runner.go:130] >     },
	I1205 20:01:12.891707  567781 command_runner.go:130] >     {
	I1205 20:01:12.891719  567781 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 20:01:12.891738  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891749  567781 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 20:01:12.891758  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891766  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891780  567781 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 20:01:12.891795  567781 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 20:01:12.891805  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891815  567781 command_runner.go:130] >       "size": "149009664",
	I1205 20:01:12.891824  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.891830  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.891838  567781 command_runner.go:130] >       },
	I1205 20:01:12.891845  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891855  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.891865  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.891873  567781 command_runner.go:130] >     },
	I1205 20:01:12.891878  567781 command_runner.go:130] >     {
	I1205 20:01:12.891890  567781 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 20:01:12.891899  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.891910  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 20:01:12.891919  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891928  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.891941  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 20:01:12.891958  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 20:01:12.891965  567781 command_runner.go:130] >       ],
	I1205 20:01:12.891970  567781 command_runner.go:130] >       "size": "95274464",
	I1205 20:01:12.891982  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.891986  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.891990  567781 command_runner.go:130] >       },
	I1205 20:01:12.891993  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.891999  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.892005  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.892011  567781 command_runner.go:130] >     },
	I1205 20:01:12.892016  567781 command_runner.go:130] >     {
	I1205 20:01:12.892038  567781 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 20:01:12.892045  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.892065  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 20:01:12.892071  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892077  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.892104  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 20:01:12.892115  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 20:01:12.892118  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892122  567781 command_runner.go:130] >       "size": "89474374",
	I1205 20:01:12.892126  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.892129  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.892133  567781 command_runner.go:130] >       },
	I1205 20:01:12.892137  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.892140  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.892144  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.892147  567781 command_runner.go:130] >     },
	I1205 20:01:12.892151  567781 command_runner.go:130] >     {
	I1205 20:01:12.892156  567781 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 20:01:12.892177  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.892193  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 20:01:12.892197  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892202  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.892210  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 20:01:12.892216  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 20:01:12.892222  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892226  567781 command_runner.go:130] >       "size": "92783513",
	I1205 20:01:12.892230  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.892235  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.892238  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.892242  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.892246  567781 command_runner.go:130] >     },
	I1205 20:01:12.892250  567781 command_runner.go:130] >     {
	I1205 20:01:12.892256  567781 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 20:01:12.892286  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.892295  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 20:01:12.892302  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892307  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.892314  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 20:01:12.892322  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 20:01:12.892326  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892330  567781 command_runner.go:130] >       "size": "68457798",
	I1205 20:01:12.892335  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.892338  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.892341  567781 command_runner.go:130] >       },
	I1205 20:01:12.892346  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.892350  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.892353  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.892357  567781 command_runner.go:130] >     },
	I1205 20:01:12.892360  567781 command_runner.go:130] >     {
	I1205 20:01:12.892366  567781 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 20:01:12.892370  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.892375  567781 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 20:01:12.892379  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892383  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.892390  567781 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 20:01:12.892400  567781 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 20:01:12.892406  567781 command_runner.go:130] >       ],
	I1205 20:01:12.892412  567781 command_runner.go:130] >       "size": "742080",
	I1205 20:01:12.892422  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.892429  567781 command_runner.go:130] >         "value": "65535"
	I1205 20:01:12.892435  567781 command_runner.go:130] >       },
	I1205 20:01:12.892441  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.892449  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.892455  567781 command_runner.go:130] >       "pinned": true
	I1205 20:01:12.892462  567781 command_runner.go:130] >     }
	I1205 20:01:12.892465  567781 command_runner.go:130] >   ]
	I1205 20:01:12.892474  567781 command_runner.go:130] > }
	I1205 20:01:12.892691  567781 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:01:12.892704  567781 crio.go:433] Images already preloaded, skipping extraction
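	The "all images are preloaded" conclusion above is drawn from the tag list returned by crictl for the Kubernetes v1.31.2 / CRI-O image set. To inspect the same data by hand, the JSON output can be filtered with jq (assuming jq is installed on the host):
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'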
	I1205 20:01:12.892758  567781 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:01:12.924244  567781 command_runner.go:130] > {
	I1205 20:01:12.924286  567781 command_runner.go:130] >   "images": [
	I1205 20:01:12.924293  567781 command_runner.go:130] >     {
	I1205 20:01:12.924306  567781 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 20:01:12.924314  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.924323  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 20:01:12.924334  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924342  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.924357  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 20:01:12.924373  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 20:01:12.924380  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924388  567781 command_runner.go:130] >       "size": "94965812",
	I1205 20:01:12.924397  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.924403  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.924423  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.924434  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.924440  567781 command_runner.go:130] >     },
	I1205 20:01:12.924447  567781 command_runner.go:130] >     {
	I1205 20:01:12.924458  567781 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 20:01:12.924468  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.924477  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 20:01:12.924486  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924496  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.924511  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 20:01:12.924527  567781 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 20:01:12.924537  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924551  567781 command_runner.go:130] >       "size": "94958644",
	I1205 20:01:12.924561  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.924573  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.924582  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.924590  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.924598  567781 command_runner.go:130] >     },
	I1205 20:01:12.924606  567781 command_runner.go:130] >     {
	I1205 20:01:12.924615  567781 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 20:01:12.924624  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.924632  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 20:01:12.924640  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924646  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.924656  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 20:01:12.924669  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 20:01:12.924679  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924686  567781 command_runner.go:130] >       "size": "1363676",
	I1205 20:01:12.924695  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.924701  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.924713  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.924722  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.924730  567781 command_runner.go:130] >     },
	I1205 20:01:12.924735  567781 command_runner.go:130] >     {
	I1205 20:01:12.924747  567781 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 20:01:12.924753  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.924764  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 20:01:12.924773  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924780  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.924795  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 20:01:12.924820  567781 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 20:01:12.924830  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924837  567781 command_runner.go:130] >       "size": "31470524",
	I1205 20:01:12.924846  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.924852  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.924866  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.924875  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.924880  567781 command_runner.go:130] >     },
	I1205 20:01:12.924888  567781 command_runner.go:130] >     {
	I1205 20:01:12.924897  567781 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 20:01:12.924906  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.924915  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 20:01:12.924923  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924929  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.924943  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 20:01:12.924957  567781 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 20:01:12.924964  567781 command_runner.go:130] >       ],
	I1205 20:01:12.924973  567781 command_runner.go:130] >       "size": "63273227",
	I1205 20:01:12.924979  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.924989  567781 command_runner.go:130] >       "username": "nonroot",
	I1205 20:01:12.924995  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925005  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925011  567781 command_runner.go:130] >     },
	I1205 20:01:12.925020  567781 command_runner.go:130] >     {
	I1205 20:01:12.925030  567781 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 20:01:12.925040  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925047  567781 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 20:01:12.925059  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925073  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925087  567781 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 20:01:12.925102  567781 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 20:01:12.925111  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925118  567781 command_runner.go:130] >       "size": "149009664",
	I1205 20:01:12.925126  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.925132  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.925146  567781 command_runner.go:130] >       },
	I1205 20:01:12.925154  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925159  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925175  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925184  567781 command_runner.go:130] >     },
	I1205 20:01:12.925190  567781 command_runner.go:130] >     {
	I1205 20:01:12.925202  567781 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 20:01:12.925211  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925219  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 20:01:12.925227  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925232  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925246  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 20:01:12.925259  567781 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 20:01:12.925265  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925275  567781 command_runner.go:130] >       "size": "95274464",
	I1205 20:01:12.925281  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.925291  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.925297  567781 command_runner.go:130] >       },
	I1205 20:01:12.925306  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925311  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925320  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925325  567781 command_runner.go:130] >     },
	I1205 20:01:12.925330  567781 command_runner.go:130] >     {
	I1205 20:01:12.925342  567781 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 20:01:12.925351  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925362  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 20:01:12.925369  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925374  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925411  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 20:01:12.925427  567781 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 20:01:12.925433  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925438  567781 command_runner.go:130] >       "size": "89474374",
	I1205 20:01:12.925447  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.925453  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.925458  567781 command_runner.go:130] >       },
	I1205 20:01:12.925466  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925480  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925491  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925496  567781 command_runner.go:130] >     },
	I1205 20:01:12.925506  567781 command_runner.go:130] >     {
	I1205 20:01:12.925515  567781 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 20:01:12.925525  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925536  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 20:01:12.925544  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925551  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925562  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 20:01:12.925582  567781 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 20:01:12.925591  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925598  567781 command_runner.go:130] >       "size": "92783513",
	I1205 20:01:12.925608  567781 command_runner.go:130] >       "uid": null,
	I1205 20:01:12.925615  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925624  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925633  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925639  567781 command_runner.go:130] >     },
	I1205 20:01:12.925648  567781 command_runner.go:130] >     {
	I1205 20:01:12.925656  567781 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 20:01:12.925666  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925673  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 20:01:12.925682  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925689  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925703  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 20:01:12.925723  567781 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 20:01:12.925727  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925733  567781 command_runner.go:130] >       "size": "68457798",
	I1205 20:01:12.925738  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.925744  567781 command_runner.go:130] >         "value": "0"
	I1205 20:01:12.925749  567781 command_runner.go:130] >       },
	I1205 20:01:12.925755  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925761  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925779  567781 command_runner.go:130] >       "pinned": false
	I1205 20:01:12.925786  567781 command_runner.go:130] >     },
	I1205 20:01:12.925795  567781 command_runner.go:130] >     {
	I1205 20:01:12.925804  567781 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 20:01:12.925812  567781 command_runner.go:130] >       "repoTags": [
	I1205 20:01:12.925819  567781 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 20:01:12.925825  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925831  567781 command_runner.go:130] >       "repoDigests": [
	I1205 20:01:12.925843  567781 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 20:01:12.925859  567781 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 20:01:12.925866  567781 command_runner.go:130] >       ],
	I1205 20:01:12.925872  567781 command_runner.go:130] >       "size": "742080",
	I1205 20:01:12.925879  567781 command_runner.go:130] >       "uid": {
	I1205 20:01:12.925885  567781 command_runner.go:130] >         "value": "65535"
	I1205 20:01:12.925891  567781 command_runner.go:130] >       },
	I1205 20:01:12.925896  567781 command_runner.go:130] >       "username": "",
	I1205 20:01:12.925900  567781 command_runner.go:130] >       "spec": null,
	I1205 20:01:12.925906  567781 command_runner.go:130] >       "pinned": true
	I1205 20:01:12.925909  567781 command_runner.go:130] >     }
	I1205 20:01:12.925914  567781 command_runner.go:130] >   ]
	I1205 20:01:12.925921  567781 command_runner.go:130] > }
	I1205 20:01:12.926163  567781 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:01:12.926187  567781 cache_images.go:84] Images are preloaded, skipping loading
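The JSON dump above is CRI-O's image inventory as reported over the CRI; minikube checks it against the images it expects for Kubernetes v1.31.2 before concluding that everything is preloaded and the image load can be skipped. A roughly equivalent listing can be pulled by hand from the node with crictl (a sketch, assuming the default CRI-O socket and the multinode-346389 profile seen in this log):

    minikube ssh -p multinode-346389 -- sudo crictl images -o json

The "pinned": true entry for registry.k8s.io/pause:3.10 in that output corresponds to the pause_image pin that appears later in the crio config dump.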
	I1205 20:01:12.926198  567781 kubeadm.go:934] updating node { 192.168.39.170 8443 v1.31.2 crio true true} ...
	I1205 20:01:12.926368  567781 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-346389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-346389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
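The [Unit]/[Service]/[Install] lines above are fragments of a systemd drop-in for the kubelet, not a complete unit file; the empty ExecStart= followed by the full ExecStart= is the standard systemd idiom for replacing the packaged command line. To see the unit exactly as systemd resolves it on the node, one option is the following (a sketch; systemctl cat prints the base unit plus all drop-ins, and the drop-in's path is not shown in this log):

    minikube ssh -p multinode-346389 -- sudo systemctl cat kubelet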
	I1205 20:01:12.926464  567781 ssh_runner.go:195] Run: crio config
	I1205 20:01:12.970113  567781 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 20:01:12.970150  567781 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 20:01:12.970162  567781 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 20:01:12.970168  567781 command_runner.go:130] > #
	I1205 20:01:12.970179  567781 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 20:01:12.970186  567781 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 20:01:12.970193  567781 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 20:01:12.970200  567781 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 20:01:12.970204  567781 command_runner.go:130] > # reload'.
	I1205 20:01:12.970210  567781 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 20:01:12.970218  567781 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 20:01:12.970229  567781 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 20:01:12.970258  567781 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 20:01:12.970268  567781 command_runner.go:130] > [crio]
	I1205 20:01:12.970274  567781 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 20:01:12.970284  567781 command_runner.go:130] > # containers images, in this directory.
	I1205 20:01:12.970289  567781 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1205 20:01:12.970301  567781 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 20:01:12.970306  567781 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1205 20:01:12.970317  567781 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1205 20:01:12.970321  567781 command_runner.go:130] > # imagestore = ""
	I1205 20:01:12.970328  567781 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 20:01:12.970335  567781 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 20:01:12.970340  567781 command_runner.go:130] > storage_driver = "overlay"
	I1205 20:01:12.970346  567781 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 20:01:12.970353  567781 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 20:01:12.970357  567781 command_runner.go:130] > storage_option = [
	I1205 20:01:12.970363  567781 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1205 20:01:12.970367  567781 command_runner.go:130] > ]
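root, runroot, storage_driver and storage_option above are the storage settings CRI-O ended up with after merging containers-storage.conf with its own configuration: overlay with nodev,metacopy=on. One rough way to confirm those mount options are actually in effect for running containers is to look at the live overlay mounts on the node (a sketch; the exact mount lines depend on which containers happen to be up):

    minikube ssh -p multinode-346389 -- 'grep overlay /proc/mounts | head -5'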
	I1205 20:01:12.970373  567781 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 20:01:12.970386  567781 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 20:01:12.970397  567781 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 20:01:12.970410  567781 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 20:01:12.970420  567781 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 20:01:12.970431  567781 command_runner.go:130] > # always happen on a node reboot
	I1205 20:01:12.970438  567781 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 20:01:12.970463  567781 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 20:01:12.970475  567781 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 20:01:12.970484  567781 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 20:01:12.970496  567781 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1205 20:01:12.970508  567781 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 20:01:12.970524  567781 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 20:01:12.970533  567781 command_runner.go:130] > # internal_wipe = true
	I1205 20:01:12.970545  567781 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1205 20:01:12.970557  567781 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1205 20:01:12.970576  567781 command_runner.go:130] > # internal_repair = false
	I1205 20:01:12.970585  567781 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 20:01:12.970594  567781 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 20:01:12.970604  567781 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 20:01:12.970615  567781 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 20:01:12.970629  567781 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 20:01:12.970639  567781 command_runner.go:130] > [crio.api]
	I1205 20:01:12.970650  567781 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 20:01:12.970661  567781 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 20:01:12.970672  567781 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 20:01:12.970682  567781 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 20:01:12.970697  567781 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 20:01:12.970708  567781 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 20:01:12.970718  567781 command_runner.go:130] > # stream_port = "0"
	I1205 20:01:12.970730  567781 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 20:01:12.970739  567781 command_runner.go:130] > # stream_enable_tls = false
	I1205 20:01:12.970758  567781 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 20:01:12.970770  567781 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 20:01:12.970783  567781 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 20:01:12.970797  567781 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 20:01:12.970809  567781 command_runner.go:130] > # minutes.
	I1205 20:01:12.970819  567781 command_runner.go:130] > # stream_tls_cert = ""
	I1205 20:01:12.970839  567781 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 20:01:12.970853  567781 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 20:01:12.970863  567781 command_runner.go:130] > # stream_tls_key = ""
	I1205 20:01:12.970873  567781 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 20:01:12.970887  567781 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 20:01:12.970914  567781 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 20:01:12.970925  567781 command_runner.go:130] > # stream_tls_ca = ""
	I1205 20:01:12.970940  567781 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 20:01:12.970952  567781 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1205 20:01:12.970962  567781 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 20:01:12.970968  567781 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
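Both gRPC message-size limits are set to 16777216 bytes (16 MiB) here, lower than the 80 MiB CRI-O falls back to when the options are unset, so something in the provisioned configuration is setting them explicitly. A way to find which file that is (a sketch; it assumes the overrides live somewhere under /etc/crio, which this log does not actually show):

    minikube ssh -p multinode-346389 -- 'sudo grep -rn grpc_max /etc/crio'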
	I1205 20:01:12.970985  567781 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 20:01:12.970997  567781 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 20:01:12.971003  567781 command_runner.go:130] > [crio.runtime]
	I1205 20:01:12.971015  567781 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 20:01:12.971024  567781 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 20:01:12.971032  567781 command_runner.go:130] > # "nofile=1024:2048"
	I1205 20:01:12.971042  567781 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 20:01:12.971051  567781 command_runner.go:130] > # default_ulimits = [
	I1205 20:01:12.971056  567781 command_runner.go:130] > # ]
	I1205 20:01:12.971070  567781 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 20:01:12.971081  567781 command_runner.go:130] > # no_pivot = false
	I1205 20:01:12.971089  567781 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 20:01:12.971102  567781 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 20:01:12.971113  567781 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 20:01:12.971123  567781 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 20:01:12.971134  567781 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 20:01:12.971147  567781 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:01:12.971158  567781 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1205 20:01:12.971166  567781 command_runner.go:130] > # Cgroup setting for conmon
	I1205 20:01:12.971180  567781 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 20:01:12.971188  567781 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 20:01:12.971196  567781 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 20:01:12.971207  567781 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 20:01:12.971219  567781 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:01:12.971227  567781 command_runner.go:130] > conmon_env = [
	I1205 20:01:12.971236  567781 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 20:01:12.971248  567781 command_runner.go:130] > ]
	I1205 20:01:12.971259  567781 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 20:01:12.971268  567781 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 20:01:12.971287  567781 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 20:01:12.971298  567781 command_runner.go:130] > # default_env = [
	I1205 20:01:12.971304  567781 command_runner.go:130] > # ]
	I1205 20:01:12.971315  567781 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 20:01:12.971334  567781 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1205 20:01:12.971344  567781 command_runner.go:130] > # selinux = false
	I1205 20:01:12.971353  567781 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 20:01:12.971364  567781 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 20:01:12.971376  567781 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 20:01:12.971383  567781 command_runner.go:130] > # seccomp_profile = ""
	I1205 20:01:12.971397  567781 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 20:01:12.971406  567781 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 20:01:12.971419  567781 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 20:01:12.971430  567781 command_runner.go:130] > # which might increase security.
	I1205 20:01:12.971437  567781 command_runner.go:130] > # This option is currently deprecated,
	I1205 20:01:12.971453  567781 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1205 20:01:12.971463  567781 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1205 20:01:12.971474  567781 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 20:01:12.971487  567781 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 20:01:12.971502  567781 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 20:01:12.971514  567781 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 20:01:12.971526  567781 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:01:12.971533  567781 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 20:01:12.971542  567781 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 20:01:12.971550  567781 command_runner.go:130] > # the cgroup blockio controller.
	I1205 20:01:12.971559  567781 command_runner.go:130] > # blockio_config_file = ""
	I1205 20:01:12.971571  567781 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1205 20:01:12.971581  567781 command_runner.go:130] > # blockio parameters.
	I1205 20:01:12.971588  567781 command_runner.go:130] > # blockio_reload = false
	I1205 20:01:12.971609  567781 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 20:01:12.971619  567781 command_runner.go:130] > # irqbalance daemon.
	I1205 20:01:12.971628  567781 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 20:01:12.971640  567781 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1205 20:01:12.971653  567781 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1205 20:01:12.971666  567781 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1205 20:01:12.971683  567781 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1205 20:01:12.971697  567781 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 20:01:12.971716  567781 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:01:12.971728  567781 command_runner.go:130] > # rdt_config_file = ""
	I1205 20:01:12.971737  567781 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 20:01:12.971745  567781 command_runner.go:130] > cgroup_manager = "cgroupfs"
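cgroup_manager = "cgroupfs" means CRI-O manipulates cgroups directly instead of delegating to systemd, and the kubelet's cgroupDriver has to agree with it to avoid cgroup-related pod start failures. Since the kubelet config path is already visible in the ExecStart line earlier in this log, a quick consistency check looks like this (a sketch):

    minikube ssh -p multinode-346389 -- sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml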
	I1205 20:01:12.971769  567781 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 20:01:12.971779  567781 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 20:01:12.971786  567781 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 20:01:12.971798  567781 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 20:01:12.971807  567781 command_runner.go:130] > # will be added.
	I1205 20:01:12.971814  567781 command_runner.go:130] > # default_capabilities = [
	I1205 20:01:12.971824  567781 command_runner.go:130] > # 	"CHOWN",
	I1205 20:01:12.971829  567781 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 20:01:12.971836  567781 command_runner.go:130] > # 	"FSETID",
	I1205 20:01:12.971845  567781 command_runner.go:130] > # 	"FOWNER",
	I1205 20:01:12.971852  567781 command_runner.go:130] > # 	"SETGID",
	I1205 20:01:12.971861  567781 command_runner.go:130] > # 	"SETUID",
	I1205 20:01:12.971866  567781 command_runner.go:130] > # 	"SETPCAP",
	I1205 20:01:12.971874  567781 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 20:01:12.971877  567781 command_runner.go:130] > # 	"KILL",
	I1205 20:01:12.971883  567781 command_runner.go:130] > # ]
	I1205 20:01:12.971897  567781 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 20:01:12.971911  567781 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 20:01:12.971922  567781 command_runner.go:130] > # add_inheritable_capabilities = false
	I1205 20:01:12.971931  567781 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 20:01:12.971944  567781 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:01:12.971951  567781 command_runner.go:130] > default_sysctls = [
	I1205 20:01:12.971962  567781 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1205 20:01:12.971970  567781 command_runner.go:130] > ]
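The one default sysctl, net.ipv4.ip_unprivileged_port_start=0, lets unprivileged container processes bind ports below 1024 without CAP_NET_BIND_SERVICE. It can be spot-checked from inside any pod in the profile (a sketch; the busybox image from the inventory above is just a convenient throwaway workload, and the tag should match whatever is actually preloaded):

    kubectl --context multinode-346389 run sysctl-check --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      cat /proc/sys/net/ipv4/ip_unprivileged_port_start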
	I1205 20:01:12.971979  567781 command_runner.go:130] > # List of devices on the host that a
	I1205 20:01:12.971991  567781 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 20:01:12.972001  567781 command_runner.go:130] > # allowed_devices = [
	I1205 20:01:12.972008  567781 command_runner.go:130] > # 	"/dev/fuse",
	I1205 20:01:12.972017  567781 command_runner.go:130] > # ]
	I1205 20:01:12.972027  567781 command_runner.go:130] > # List of additional devices. specified as
	I1205 20:01:12.972041  567781 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 20:01:12.972050  567781 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 20:01:12.972060  567781 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:01:12.972070  567781 command_runner.go:130] > # additional_devices = [
	I1205 20:01:12.972076  567781 command_runner.go:130] > # ]
	I1205 20:01:12.972088  567781 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 20:01:12.972100  567781 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 20:01:12.972107  567781 command_runner.go:130] > # 	"/etc/cdi",
	I1205 20:01:12.972117  567781 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 20:01:12.972122  567781 command_runner.go:130] > # ]
	I1205 20:01:12.972141  567781 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 20:01:12.972155  567781 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 20:01:12.972166  567781 command_runner.go:130] > # Defaults to false.
	I1205 20:01:12.972174  567781 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 20:01:12.972187  567781 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 20:01:12.972199  567781 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 20:01:12.972206  567781 command_runner.go:130] > # hooks_dir = [
	I1205 20:01:12.972215  567781 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 20:01:12.972219  567781 command_runner.go:130] > # ]
	I1205 20:01:12.972228  567781 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 20:01:12.972242  567781 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 20:01:12.972253  567781 command_runner.go:130] > # its default mounts from the following two files:
	I1205 20:01:12.972258  567781 command_runner.go:130] > #
	I1205 20:01:12.972290  567781 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 20:01:12.972305  567781 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 20:01:12.972314  567781 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 20:01:12.972323  567781 command_runner.go:130] > #
	I1205 20:01:12.972333  567781 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 20:01:12.972345  567781 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 20:01:12.972359  567781 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 20:01:12.972371  567781 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 20:01:12.972379  567781 command_runner.go:130] > #
	I1205 20:01:12.972389  567781 command_runner.go:130] > # default_mounts_file = ""
	I1205 20:01:12.972400  567781 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 20:01:12.972415  567781 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 20:01:12.972422  567781 command_runner.go:130] > pids_limit = 1024
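pids_limit = 1024 puts a per-container ceiling on process count at the CRI-O level, even though the surrounding comment notes the option is deprecated in favor of the kubelet's --pod-pids-limit. Whether it landed in a given container's runtime spec can be checked roughly like this (a sketch; <container-id> is a placeholder for an ID taken from crictl ps, and the exact JSON layout of the inspect output varies by CRI-O version):

    minikube ssh -p multinode-346389 -- 'sudo crictl ps -q | head -1'
    minikube ssh -p multinode-346389 -- 'sudo crictl inspect -o json <container-id> | grep -i pids'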
	I1205 20:01:12.972436  567781 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1205 20:01:12.972454  567781 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 20:01:12.972467  567781 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 20:01:12.972483  567781 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 20:01:12.972492  567781 command_runner.go:130] > # log_size_max = -1
	I1205 20:01:12.972504  567781 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 20:01:12.972515  567781 command_runner.go:130] > # log_to_journald = false
	I1205 20:01:12.972525  567781 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 20:01:12.972533  567781 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 20:01:12.972549  567781 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 20:01:12.972561  567781 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 20:01:12.972570  567781 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 20:01:12.972579  567781 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 20:01:12.972588  567781 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 20:01:12.972596  567781 command_runner.go:130] > # read_only = false
	I1205 20:01:12.972602  567781 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 20:01:12.972615  567781 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 20:01:12.972625  567781 command_runner.go:130] > # live configuration reload.
	I1205 20:01:12.972632  567781 command_runner.go:130] > # log_level = "info"
	I1205 20:01:12.972643  567781 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 20:01:12.972654  567781 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:01:12.972661  567781 command_runner.go:130] > # log_filter = ""
	I1205 20:01:12.972693  567781 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 20:01:12.972714  567781 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 20:01:12.972724  567781 command_runner.go:130] > # separated by comma.
	I1205 20:01:12.972740  567781 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 20:01:12.972750  567781 command_runner.go:130] > # uid_mappings = ""
	I1205 20:01:12.972759  567781 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 20:01:12.972772  567781 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 20:01:12.972788  567781 command_runner.go:130] > # separated by comma.
	I1205 20:01:12.972800  567781 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 20:01:12.972811  567781 command_runner.go:130] > # gid_mappings = ""
	I1205 20:01:12.972821  567781 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 20:01:12.972835  567781 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:01:12.972848  567781 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:01:12.972863  567781 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 20:01:12.972874  567781 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 20:01:12.972882  567781 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 20:01:12.972891  567781 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:01:12.972901  567781 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:01:12.972918  567781 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 20:01:12.972925  567781 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 20:01:12.972939  567781 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 20:01:12.972951  567781 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 20:01:12.972963  567781 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 20:01:12.972978  567781 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 20:01:12.972987  567781 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 20:01:12.972995  567781 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 20:01:12.973006  567781 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 20:01:12.973014  567781 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 20:01:12.973024  567781 command_runner.go:130] > drop_infra_ctr = false
	I1205 20:01:12.973033  567781 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 20:01:12.973045  567781 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 20:01:12.973061  567781 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 20:01:12.973071  567781 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 20:01:12.973083  567781 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1205 20:01:12.973096  567781 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1205 20:01:12.973106  567781 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1205 20:01:12.973118  567781 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1205 20:01:12.973125  567781 command_runner.go:130] > # shared_cpuset = ""
	I1205 20:01:12.973138  567781 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 20:01:12.973149  567781 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 20:01:12.973157  567781 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 20:01:12.973170  567781 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 20:01:12.973181  567781 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1205 20:01:12.973196  567781 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1205 20:01:12.973206  567781 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1205 20:01:12.973217  567781 command_runner.go:130] > # enable_criu_support = false
	I1205 20:01:12.973225  567781 command_runner.go:130] > # Enable/disable the generation of the container,
	I1205 20:01:12.973241  567781 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1205 20:01:12.973252  567781 command_runner.go:130] > # enable_pod_events = false
	I1205 20:01:12.973259  567781 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 20:01:12.973269  567781 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 20:01:12.973287  567781 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1205 20:01:12.973297  567781 command_runner.go:130] > # default_runtime = "runc"
	I1205 20:01:12.973306  567781 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 20:01:12.973320  567781 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1205 20:01:12.973335  567781 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 20:01:12.973346  567781 command_runner.go:130] > # creation as a file is not desired either.
	I1205 20:01:12.973359  567781 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 20:01:12.973374  567781 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 20:01:12.973384  567781 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 20:01:12.973388  567781 command_runner.go:130] > # ]
	I1205 20:01:12.973399  567781 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 20:01:12.973412  567781 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 20:01:12.973425  567781 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1205 20:01:12.973437  567781 command_runner.go:130] > # Each entry in the table should follow the format:
	I1205 20:01:12.973446  567781 command_runner.go:130] > #
	I1205 20:01:12.973454  567781 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1205 20:01:12.973464  567781 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1205 20:01:12.973490  567781 command_runner.go:130] > # runtime_type = "oci"
	I1205 20:01:12.973500  567781 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1205 20:01:12.973506  567781 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1205 20:01:12.973512  567781 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1205 20:01:12.973524  567781 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1205 20:01:12.973536  567781 command_runner.go:130] > # monitor_env = []
	I1205 20:01:12.973547  567781 command_runner.go:130] > # privileged_without_host_devices = false
	I1205 20:01:12.973554  567781 command_runner.go:130] > # allowed_annotations = []
	I1205 20:01:12.973566  567781 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1205 20:01:12.973575  567781 command_runner.go:130] > # Where:
	I1205 20:01:12.973583  567781 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1205 20:01:12.973598  567781 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1205 20:01:12.973612  567781 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 20:01:12.973622  567781 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 20:01:12.973632  567781 command_runner.go:130] > #   in $PATH.
	I1205 20:01:12.973642  567781 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1205 20:01:12.973652  567781 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 20:01:12.973662  567781 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1205 20:01:12.973669  567781 command_runner.go:130] > #   state.
	I1205 20:01:12.973677  567781 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 20:01:12.973691  567781 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1205 20:01:12.973705  567781 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 20:01:12.973713  567781 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 20:01:12.973726  567781 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 20:01:12.973739  567781 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 20:01:12.973750  567781 command_runner.go:130] > #   The currently recognized values are:
	I1205 20:01:12.973757  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 20:01:12.973770  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 20:01:12.973789  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 20:01:12.973801  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 20:01:12.973813  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 20:01:12.973825  567781 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 20:01:12.973838  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1205 20:01:12.973847  567781 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1205 20:01:12.973858  567781 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 20:01:12.973872  567781 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1205 20:01:12.973882  567781 command_runner.go:130] > #   deprecated option "conmon".
	I1205 20:01:12.973893  567781 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1205 20:01:12.973906  567781 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1205 20:01:12.973917  567781 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1205 20:01:12.973926  567781 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 20:01:12.973933  567781 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1205 20:01:12.973944  567781 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1205 20:01:12.973959  567781 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1205 20:01:12.973972  567781 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1205 20:01:12.973980  567781 command_runner.go:130] > #
	I1205 20:01:12.973988  567781 command_runner.go:130] > # Using the seccomp notifier feature:
	I1205 20:01:12.973995  567781 command_runner.go:130] > #
	I1205 20:01:12.974005  567781 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1205 20:01:12.974017  567781 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1205 20:01:12.974021  567781 command_runner.go:130] > #
	I1205 20:01:12.974035  567781 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1205 20:01:12.974049  567781 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1205 20:01:12.974057  567781 command_runner.go:130] > #
	I1205 20:01:12.974066  567781 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1205 20:01:12.974075  567781 command_runner.go:130] > # feature.
	I1205 20:01:12.974081  567781 command_runner.go:130] > #
	I1205 20:01:12.974094  567781 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1205 20:01:12.974102  567781 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1205 20:01:12.974113  567781 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1205 20:01:12.974127  567781 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1205 20:01:12.974140  567781 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1205 20:01:12.974148  567781 command_runner.go:130] > #
	I1205 20:01:12.974157  567781 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1205 20:01:12.974173  567781 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1205 20:01:12.974181  567781 command_runner.go:130] > #
	I1205 20:01:12.974188  567781 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1205 20:01:12.974194  567781 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1205 20:01:12.974202  567781 command_runner.go:130] > #
	I1205 20:01:12.974213  567781 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1205 20:01:12.974226  567781 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1205 20:01:12.974237  567781 command_runner.go:130] > # limitation.
	I1205 20:01:12.974248  567781 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 20:01:12.974257  567781 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1205 20:01:12.974266  567781 command_runner.go:130] > runtime_type = "oci"
	I1205 20:01:12.974272  567781 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 20:01:12.974281  567781 command_runner.go:130] > runtime_config_path = ""
	I1205 20:01:12.974292  567781 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 20:01:12.974302  567781 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 20:01:12.974312  567781 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 20:01:12.974319  567781 command_runner.go:130] > monitor_env = [
	I1205 20:01:12.974327  567781 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 20:01:12.974336  567781 command_runner.go:130] > ]
	I1205 20:01:12.974344  567781 command_runner.go:130] > privileged_without_host_devices = false
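The [crio.runtime.runtimes.runc] table above is the handler CRI-O will use by default (default_runtime is commented out and falls back to "runc"), so containers in this profile go through /usr/bin/runc with conmon from /usr/libexec/crio as the monitor. Re-filtering the same crio config output is an easy way to pull just that table back out when comparing nodes (a sketch):

    minikube ssh -p multinode-346389 -- sudo crio config | grep -A 9 'crio.runtime.runtimes.runc'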
	I1205 20:01:12.974356  567781 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 20:01:12.974362  567781 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 20:01:12.974368  567781 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 20:01:12.974376  567781 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 20:01:12.974385  567781 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 20:01:12.974395  567781 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 20:01:12.974413  567781 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 20:01:12.974427  567781 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 20:01:12.974440  567781 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 20:01:12.974453  567781 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 20:01:12.974461  567781 command_runner.go:130] > # Example:
	I1205 20:01:12.974466  567781 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 20:01:12.974471  567781 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 20:01:12.974476  567781 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 20:01:12.974480  567781 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 20:01:12.974485  567781 command_runner.go:130] > # cpuset = 0
	I1205 20:01:12.974489  567781 command_runner.go:130] > # cpushares = "0-1"
	I1205 20:01:12.974492  567781 command_runner.go:130] > # Where:
	I1205 20:01:12.974500  567781 command_runner.go:130] > # The workload name is workload-type.
	I1205 20:01:12.974506  567781 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 20:01:12.974512  567781 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 20:01:12.974518  567781 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 20:01:12.974525  567781 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 20:01:12.974530  567781 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 20:01:12.974537  567781 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1205 20:01:12.974547  567781 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1205 20:01:12.974555  567781 command_runner.go:130] > # Default value is set to true
	I1205 20:01:12.974562  567781 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1205 20:01:12.974571  567781 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1205 20:01:12.974579  567781 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1205 20:01:12.974586  567781 command_runner.go:130] > # Default value is set to 'false'
	I1205 20:01:12.974593  567781 command_runner.go:130] > # disable_hostport_mapping = false
	I1205 20:01:12.974602  567781 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 20:01:12.974607  567781 command_runner.go:130] > #
	I1205 20:01:12.974614  567781 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 20:01:12.974620  567781 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 20:01:12.974625  567781 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 20:01:12.974631  567781 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 20:01:12.974636  567781 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 20:01:12.974639  567781 command_runner.go:130] > [crio.image]
	I1205 20:01:12.974644  567781 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 20:01:12.974650  567781 command_runner.go:130] > # default_transport = "docker://"
	I1205 20:01:12.974656  567781 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 20:01:12.974662  567781 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:01:12.974665  567781 command_runner.go:130] > # global_auth_file = ""
	I1205 20:01:12.974671  567781 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 20:01:12.974679  567781 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:01:12.974683  567781 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1205 20:01:12.974691  567781 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 20:01:12.974696  567781 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:01:12.974702  567781 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:01:12.974706  567781 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 20:01:12.974711  567781 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 20:01:12.974721  567781 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1205 20:01:12.974730  567781 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1205 20:01:12.974738  567781 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 20:01:12.974742  567781 command_runner.go:130] > # pause_command = "/pause"
	I1205 20:01:12.974750  567781 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1205 20:01:12.974756  567781 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1205 20:01:12.974767  567781 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1205 20:01:12.974780  567781 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1205 20:01:12.974790  567781 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1205 20:01:12.974796  567781 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1205 20:01:12.974802  567781 command_runner.go:130] > # pinned_images = [
	I1205 20:01:12.974806  567781 command_runner.go:130] > # ]
	I1205 20:01:12.974812  567781 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 20:01:12.974820  567781 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 20:01:12.974826  567781 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 20:01:12.974834  567781 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 20:01:12.974839  567781 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 20:01:12.974843  567781 command_runner.go:130] > # signature_policy = ""
	I1205 20:01:12.974848  567781 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1205 20:01:12.974855  567781 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1205 20:01:12.974863  567781 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1205 20:01:12.974869  567781 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1205 20:01:12.974878  567781 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1205 20:01:12.974883  567781 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1205 20:01:12.974891  567781 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 20:01:12.974896  567781 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 20:01:12.974903  567781 command_runner.go:130] > # changing them here.
	I1205 20:01:12.974907  567781 command_runner.go:130] > # insecure_registries = [
	I1205 20:01:12.974910  567781 command_runner.go:130] > # ]
	I1205 20:01:12.974916  567781 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 20:01:12.974923  567781 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 20:01:12.974927  567781 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 20:01:12.974932  567781 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 20:01:12.974940  567781 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 20:01:12.974945  567781 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 20:01:12.974952  567781 command_runner.go:130] > # CNI plugins.
	I1205 20:01:12.974955  567781 command_runner.go:130] > [crio.network]
	I1205 20:01:12.974960  567781 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 20:01:12.974969  567781 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1205 20:01:12.974976  567781 command_runner.go:130] > # cni_default_network = ""
	I1205 20:01:12.974981  567781 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 20:01:12.974988  567781 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 20:01:12.974993  567781 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 20:01:12.975003  567781 command_runner.go:130] > # plugin_dirs = [
	I1205 20:01:12.975006  567781 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 20:01:12.975010  567781 command_runner.go:130] > # ]
	I1205 20:01:12.975015  567781 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 20:01:12.975022  567781 command_runner.go:130] > [crio.metrics]
	I1205 20:01:12.975027  567781 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 20:01:12.975034  567781 command_runner.go:130] > enable_metrics = true
	I1205 20:01:12.975039  567781 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 20:01:12.975046  567781 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 20:01:12.975052  567781 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 20:01:12.975060  567781 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 20:01:12.975065  567781 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 20:01:12.975071  567781 command_runner.go:130] > # metrics_collectors = [
	I1205 20:01:12.975075  567781 command_runner.go:130] > # 	"operations",
	I1205 20:01:12.975080  567781 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 20:01:12.975084  567781 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 20:01:12.975088  567781 command_runner.go:130] > # 	"operations_errors",
	I1205 20:01:12.975092  567781 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 20:01:12.975097  567781 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 20:01:12.975101  567781 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 20:01:12.975106  567781 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 20:01:12.975111  567781 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 20:01:12.975118  567781 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 20:01:12.975122  567781 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 20:01:12.975129  567781 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1205 20:01:12.975133  567781 command_runner.go:130] > # 	"containers_oom_total",
	I1205 20:01:12.975139  567781 command_runner.go:130] > # 	"containers_oom",
	I1205 20:01:12.975143  567781 command_runner.go:130] > # 	"processes_defunct",
	I1205 20:01:12.975147  567781 command_runner.go:130] > # 	"operations_total",
	I1205 20:01:12.975154  567781 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 20:01:12.975167  567781 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 20:01:12.975174  567781 command_runner.go:130] > # 	"operations_errors_total",
	I1205 20:01:12.975181  567781 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 20:01:12.975185  567781 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 20:01:12.975191  567781 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 20:01:12.975197  567781 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 20:01:12.975203  567781 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 20:01:12.975207  567781 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 20:01:12.975214  567781 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1205 20:01:12.975219  567781 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1205 20:01:12.975227  567781 command_runner.go:130] > # ]
	I1205 20:01:12.975232  567781 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 20:01:12.975238  567781 command_runner.go:130] > # metrics_port = 9090
	I1205 20:01:12.975243  567781 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 20:01:12.975249  567781 command_runner.go:130] > # metrics_socket = ""
	I1205 20:01:12.975254  567781 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 20:01:12.975259  567781 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 20:01:12.975266  567781 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 20:01:12.975270  567781 command_runner.go:130] > # certificate on any modification event.
	I1205 20:01:12.975282  567781 command_runner.go:130] > # metrics_cert = ""
	I1205 20:01:12.975290  567781 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 20:01:12.975295  567781 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 20:01:12.975302  567781 command_runner.go:130] > # metrics_key = ""
	I1205 20:01:12.975307  567781 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 20:01:12.975313  567781 command_runner.go:130] > [crio.tracing]
	I1205 20:01:12.975318  567781 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 20:01:12.975323  567781 command_runner.go:130] > # enable_tracing = false
	I1205 20:01:12.975332  567781 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1205 20:01:12.975336  567781 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 20:01:12.975343  567781 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1205 20:01:12.975350  567781 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 20:01:12.975354  567781 command_runner.go:130] > # CRI-O NRI configuration.
	I1205 20:01:12.975358  567781 command_runner.go:130] > [crio.nri]
	I1205 20:01:12.975363  567781 command_runner.go:130] > # Globally enable or disable NRI.
	I1205 20:01:12.975369  567781 command_runner.go:130] > # enable_nri = false
	I1205 20:01:12.975374  567781 command_runner.go:130] > # NRI socket to listen on.
	I1205 20:01:12.975380  567781 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1205 20:01:12.975384  567781 command_runner.go:130] > # NRI plugin directory to use.
	I1205 20:01:12.975389  567781 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1205 20:01:12.975394  567781 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1205 20:01:12.975401  567781 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1205 20:01:12.975406  567781 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1205 20:01:12.975412  567781 command_runner.go:130] > # nri_disable_connections = false
	I1205 20:01:12.975417  567781 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1205 20:01:12.975424  567781 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1205 20:01:12.975428  567781 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1205 20:01:12.975433  567781 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1205 20:01:12.975438  567781 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 20:01:12.975444  567781 command_runner.go:130] > [crio.stats]
	I1205 20:01:12.975450  567781 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 20:01:12.975455  567781 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 20:01:12.975460  567781 command_runner.go:130] > # stats_collection_period = 0
	I1205 20:01:12.975641  567781 command_runner.go:130] ! time="2024-12-05 20:01:12.938494287Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1205 20:01:12.975671  567781 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
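The dumped CRI-O configuration above walks through the [crio.runtime.workloads] annotation mechanism. As a minimal, hedged sketch of how that commented example would be wired up (the drop-in file name is an assumption; /etc/crio/crio.conf.d is CRI-O's standard drop-in directory, and the "app"/"512" values are illustrative only):
	# Sketch only: declare the workload described in the dumped comments above.
	sudo tee /etc/crio/crio.conf.d/99-workload.conf >/dev/null <<-'EOF'
	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	EOF
	sudo systemctl restart crio
	# A pod opts in with the activation annotation (key only, value ignored) and can
	# override a resource per container via the annotation prefix, e.g. for a
	# container named "app" (names/values here are illustrative assumptions):
	#   metadata:
	#     annotations:
	#       io.crio/workload: ""
	#       io.crio.workload-type/app: '{"cpushares": "512"}'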
	I1205 20:01:12.975800  567781 cni.go:84] Creating CNI manager for ""
	I1205 20:01:12.975811  567781 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 20:01:12.975821  567781 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:01:12.975846  567781 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-346389 NodeName:multinode-346389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:01:12.976001  567781 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-346389"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.170"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
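minikube writes this generated config to /var/tmp/minikube/kubeadm.yaml.new (the scp step a few lines below) and uses the kubeadm binary under /var/lib/minikube/binaries/v1.31.2. A hedged sketch of sanity-checking the file on the node, assuming this kubeadm release ships the "config validate" subcommand (present in recent versions):
	# Sketch only: validate the generated kubeadm config on the node.
	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new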
	
	I1205 20:01:12.976078  567781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:01:12.986319  567781 command_runner.go:130] > kubeadm
	I1205 20:01:12.986344  567781 command_runner.go:130] > kubectl
	I1205 20:01:12.986351  567781 command_runner.go:130] > kubelet
	I1205 20:01:12.986493  567781 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:01:12.986558  567781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:01:12.996649  567781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 20:01:13.016182  567781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:01:13.035572  567781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1205 20:01:13.055929  567781 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I1205 20:01:13.060107  567781 command_runner.go:130] > 192.168.39.170	control-plane.minikube.internal
	I1205 20:01:13.060190  567781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:01:13.204330  567781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:01:13.219375  567781 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389 for IP: 192.168.39.170
	I1205 20:01:13.219404  567781 certs.go:194] generating shared ca certs ...
	I1205 20:01:13.219425  567781 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:01:13.219672  567781 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:01:13.219721  567781 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:01:13.219736  567781 certs.go:256] generating profile certs ...
	I1205 20:01:13.219845  567781 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/client.key
	I1205 20:01:13.219936  567781 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/apiserver.key.a99a356c
	I1205 20:01:13.219995  567781 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/proxy-client.key
	I1205 20:01:13.220011  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:01:13.220030  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:01:13.220059  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:01:13.220076  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:01:13.220093  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:01:13.220112  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:01:13.220131  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:01:13.220153  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:01:13.220233  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:01:13.220308  567781 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:01:13.220323  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:01:13.220360  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:01:13.220395  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:01:13.220427  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:01:13.220481  567781 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:01:13.220524  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> /usr/share/ca-certificates/5381862.pem
	I1205 20:01:13.220544  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:01:13.220562  567781 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem -> /usr/share/ca-certificates/538186.pem
	I1205 20:01:13.221193  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:01:13.246407  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:01:13.270522  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:01:13.296740  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:01:13.321632  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:01:13.346268  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:01:13.370792  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:01:13.396591  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/multinode-346389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:01:13.544785  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:01:13.646833  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:01:13.807609  567781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:01:13.935968  567781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:01:14.003409  567781 ssh_runner.go:195] Run: openssl version
	I1205 20:01:14.023379  567781 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1205 20:01:14.023465  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:01:14.051019  567781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:01:14.057942  567781 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:01:14.058417  567781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:01:14.058486  567781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:01:14.069192  567781 command_runner.go:130] > 3ec20f2e
	I1205 20:01:14.069278  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:01:14.082196  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:01:14.098540  567781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:01:14.107979  567781 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:01:14.108170  567781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:01:14.108224  567781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:01:14.113820  567781 command_runner.go:130] > b5213941
	I1205 20:01:14.114142  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:01:14.123859  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:01:14.134928  567781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:01:14.139577  567781 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:01:14.139616  567781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:01:14.139665  567781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:01:14.145926  567781 command_runner.go:130] > 51391683
	I1205 20:01:14.146059  567781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
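Each custom CA above is installed by linking it into /etc/ssl/certs and then adding an OpenSSL subject-hash symlink (<hash>.0), similar to what c_rehash does so that OpenSSL can locate the certificate by subject hash. A compact sketch of the same steps for the minikubeCA certificate, with paths and the resulting hash taken from the log:
	# Sketch only: mirror the log's CA installation for one certificate.
	crt=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$crt" /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$crt")   # prints b5213941 in the log above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"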
	I1205 20:01:14.155560  567781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:01:14.160200  567781 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:01:14.160229  567781 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 20:01:14.160238  567781 command_runner.go:130] > Device: 253,1	Inode: 3150382     Links: 1
	I1205 20:01:14.160248  567781 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:01:14.160262  567781 command_runner.go:130] > Access: 2024-12-05 19:54:21.782970096 +0000
	I1205 20:01:14.160287  567781 command_runner.go:130] > Modify: 2024-12-05 19:54:21.782970096 +0000
	I1205 20:01:14.160303  567781 command_runner.go:130] > Change: 2024-12-05 19:54:21.782970096 +0000
	I1205 20:01:14.160310  567781 command_runner.go:130] >  Birth: 2024-12-05 19:54:21.782970096 +0000
	I1205 20:01:14.160355  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:01:14.165997  567781 command_runner.go:130] > Certificate will not expire
	I1205 20:01:14.166326  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:01:14.171890  567781 command_runner.go:130] > Certificate will not expire
	I1205 20:01:14.171968  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:01:14.177472  567781 command_runner.go:130] > Certificate will not expire
	I1205 20:01:14.177558  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:01:14.183215  567781 command_runner.go:130] > Certificate will not expire
	I1205 20:01:14.183288  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:01:14.188896  567781 command_runner.go:130] > Certificate will not expire
	I1205 20:01:14.189185  567781 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:01:14.195039  567781 command_runner.go:130] > Certificate will not expire
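The repeated openssl calls above each check one certificate for expiry within the next 24 hours (-checkend 86400). An equivalent sketch that loops over the same certificate paths (the loop is illustrative; minikube issues the checks individually, as shown in the log):
	# Sketch only: check the same certificates for expiry within 24h (86400s).
	for crt in \
	    /var/lib/minikube/certs/apiserver-etcd-client.crt \
	    /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    /var/lib/minikube/certs/etcd/server.crt \
	    /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	    /var/lib/minikube/certs/etcd/peer.crt \
	    /var/lib/minikube/certs/front-proxy-client.crt; do
	  sudo openssl x509 -noout -in "$crt" -checkend 86400 \
	    && echo "OK:      $crt" || echo "EXPIRES: $crt"
	done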
	I1205 20:01:14.195129  567781 kubeadm.go:392] StartCluster: {Name:multinode-346389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-346389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:01:14.195286  567781 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:01:14.195349  567781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:01:14.232179  567781 command_runner.go:130] > c5fd89a18bbaa2140d0b8094df5b9d8dd429cd9b3bdaed26f464b78c576e7189
	I1205 20:01:14.232206  567781 command_runner.go:130] > 2e630b6753b94f0df8dd67b2fcc1bfbb5f7761f3bc9ab72dbd462e14a8378dc4
	I1205 20:01:14.232212  567781 command_runner.go:130] > ce5efd3787e17ba8738a5427baf7eea1ee5b7a8f938bb47c7abed371e5a603f7
	I1205 20:01:14.232219  567781 command_runner.go:130] > a666ed3405a3e13cf20fbd8dbac45816e904954b4ebc68fb8a1b80fd282284c8
	I1205 20:01:14.232224  567781 command_runner.go:130] > c897d3cc7ee00c86db7cd1e6bbc9eb3ea765742ebfa242a0ce8cce78952a7dde
	I1205 20:01:14.232229  567781 command_runner.go:130] > 86a636d2da85280e9a07cfc40c5efed5746d692941b52ada3d03aaf858d8a23c
	I1205 20:01:14.232235  567781 command_runner.go:130] > 8653657853de98aba7582b8f54f8e70b9afd24b32764929281d4e662609b8d11
	I1205 20:01:14.232248  567781 command_runner.go:130] > 6163a5b6d362dde00b1ce847200a6ca36c7b3c15cf8f30ebe3efe3a224b3fe1a
	I1205 20:01:14.232289  567781 cri.go:89] found id: "c5fd89a18bbaa2140d0b8094df5b9d8dd429cd9b3bdaed26f464b78c576e7189"
	I1205 20:01:14.232302  567781 cri.go:89] found id: "2e630b6753b94f0df8dd67b2fcc1bfbb5f7761f3bc9ab72dbd462e14a8378dc4"
	I1205 20:01:14.232309  567781 cri.go:89] found id: "ce5efd3787e17ba8738a5427baf7eea1ee5b7a8f938bb47c7abed371e5a603f7"
	I1205 20:01:14.232324  567781 cri.go:89] found id: "a666ed3405a3e13cf20fbd8dbac45816e904954b4ebc68fb8a1b80fd282284c8"
	I1205 20:01:14.232333  567781 cri.go:89] found id: "c897d3cc7ee00c86db7cd1e6bbc9eb3ea765742ebfa242a0ce8cce78952a7dde"
	I1205 20:01:14.232338  567781 cri.go:89] found id: "86a636d2da85280e9a07cfc40c5efed5746d692941b52ada3d03aaf858d8a23c"
	I1205 20:01:14.232346  567781 cri.go:89] found id: "8653657853de98aba7582b8f54f8e70b9afd24b32764929281d4e662609b8d11"
	I1205 20:01:14.232351  567781 cri.go:89] found id: "6163a5b6d362dde00b1ce847200a6ca36c7b3c15cf8f30ebe3efe3a224b3fe1a"
	I1205 20:01:14.232358  567781 cri.go:89] found id: ""
	I1205 20:01:14.232405  567781 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-346389 -n multinode-346389
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-346389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.65s)
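The post-mortem above enumerates the kube-system containers still known to the runtime via crictl and runc. A sketch of rerunning the same inspection directly on the node; the non-quiet crictl invocation is an added convenience and is not part of minikube's own sequence:
	# Sketch only: rerun the post-mortem's container inspection by hand.
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system          # names, state, pod
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system  # IDs only, as in the log
	sudo runc list -f json                                                     # low-level runc view, as in the log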

                                                
                                    
x
+
TestPreload (177.08s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-572068 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1205 20:10:51.381487  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-572068 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.690946891s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-572068 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-572068 image pull gcr.io/k8s-minikube/busybox: (3.370713728s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-572068
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-572068: (7.294645806s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-572068 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-572068 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m9.529222201s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-572068 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
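The failure above is that gcr.io/k8s-minikube/busybox, pulled into the cluster before the stop, no longer appears in the image list after the restart. A minimal sketch of reproducing the sequence by hand, using a stock minikube binary instead of the test build and reusing the profile name from the log:
	# Sketch only: reproduce the TestPreload scenario manually.
	minikube start -p test-preload-572068 --memory=2200 --preload=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	minikube -p test-preload-572068 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-572068
	minikube start -p test-preload-572068 --memory=2200 --driver=kvm2 --container-runtime=crio
	minikube -p test-preload-572068 image list | grep busybox   # the test expects this to match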
panic.go:629: *** TestPreload FAILED at 2024-12-05 20:12:12.831825198 +0000 UTC m=+4225.037445520
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-572068 -n test-preload-572068
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-572068 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-572068 logs -n 25: (1.131557083s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n multinode-346389 sudo cat                                       | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /home/docker/cp-test_multinode-346389-m03_multinode-346389.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-346389 cp multinode-346389-m03:/home/docker/cp-test.txt                       | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m02:/home/docker/cp-test_multinode-346389-m03_multinode-346389-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n                                                                 | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | multinode-346389-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-346389 ssh -n multinode-346389-m02 sudo cat                                   | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	|         | /home/docker/cp-test_multinode-346389-m03_multinode-346389-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-346389 node stop m03                                                          | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:56 UTC |
	| node    | multinode-346389 node start                                                             | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 19:56 UTC | 05 Dec 24 19:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-346389                                                                | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 19:57 UTC |                     |
	| stop    | -p multinode-346389                                                                     | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 19:57 UTC |                     |
	| start   | -p multinode-346389                                                                     | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 19:59 UTC | 05 Dec 24 20:03 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-346389                                                                | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 20:03 UTC |                     |
	| node    | multinode-346389 node delete                                                            | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 20:03 UTC | 05 Dec 24 20:03 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-346389 stop                                                                   | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 20:03 UTC |                     |
	| start   | -p multinode-346389                                                                     | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 20:05 UTC | 05 Dec 24 20:08 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-346389                                                                | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 20:08 UTC |                     |
	| start   | -p multinode-346389-m02                                                                 | multinode-346389-m02 | jenkins | v1.34.0 | 05 Dec 24 20:08 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-346389-m03                                                                 | multinode-346389-m03 | jenkins | v1.34.0 | 05 Dec 24 20:08 UTC | 05 Dec 24 20:09 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-346389                                                                 | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 20:09 UTC |                     |
	| delete  | -p multinode-346389-m03                                                                 | multinode-346389-m03 | jenkins | v1.34.0 | 05 Dec 24 20:09 UTC | 05 Dec 24 20:09 UTC |
	| delete  | -p multinode-346389                                                                     | multinode-346389     | jenkins | v1.34.0 | 05 Dec 24 20:09 UTC | 05 Dec 24 20:09 UTC |
	| start   | -p test-preload-572068                                                                  | test-preload-572068  | jenkins | v1.34.0 | 05 Dec 24 20:09 UTC | 05 Dec 24 20:10 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-572068 image pull                                                          | test-preload-572068  | jenkins | v1.34.0 | 05 Dec 24 20:10 UTC | 05 Dec 24 20:10 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-572068                                                                  | test-preload-572068  | jenkins | v1.34.0 | 05 Dec 24 20:10 UTC | 05 Dec 24 20:11 UTC |
	| start   | -p test-preload-572068                                                                  | test-preload-572068  | jenkins | v1.34.0 | 05 Dec 24 20:11 UTC | 05 Dec 24 20:12 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-572068 image list                                                          | test-preload-572068  | jenkins | v1.34.0 | 05 Dec 24 20:12 UTC | 05 Dec 24 20:12 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:11:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:11:03.123251  572141 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:11:03.123384  572141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:11:03.123394  572141 out.go:358] Setting ErrFile to fd 2...
	I1205 20:11:03.123402  572141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:11:03.123616  572141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:11:03.124188  572141 out.go:352] Setting JSON to false
	I1205 20:11:03.125175  572141 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10409,"bootTime":1733419054,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:11:03.125291  572141 start.go:139] virtualization: kvm guest
	I1205 20:11:03.127592  572141 out.go:177] * [test-preload-572068] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:11:03.129465  572141 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:11:03.129463  572141 notify.go:220] Checking for updates...
	I1205 20:11:03.130926  572141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:11:03.132453  572141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:11:03.133776  572141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:11:03.135167  572141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:11:03.136335  572141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:11:03.137863  572141 config.go:182] Loaded profile config "test-preload-572068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1205 20:11:03.138302  572141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:11:03.138363  572141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:11:03.154171  572141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33351
	I1205 20:11:03.154745  572141 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:11:03.155321  572141 main.go:141] libmachine: Using API Version  1
	I1205 20:11:03.155342  572141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:11:03.155692  572141 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:11:03.155880  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	I1205 20:11:03.157731  572141 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 20:11:03.159094  572141 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:11:03.159399  572141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:11:03.159439  572141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:11:03.174272  572141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I1205 20:11:03.174751  572141 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:11:03.175276  572141 main.go:141] libmachine: Using API Version  1
	I1205 20:11:03.175311  572141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:11:03.175627  572141 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:11:03.175838  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	I1205 20:11:03.210279  572141 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:11:03.211812  572141 start.go:297] selected driver: kvm2
	I1205 20:11:03.211832  572141 start.go:901] validating driver "kvm2" against &{Name:test-preload-572068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-572068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:11:03.211940  572141 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:11:03.212780  572141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:11:03.212856  572141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:11:03.228237  572141 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:11:03.228638  572141 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:11:03.228678  572141 cni.go:84] Creating CNI manager for ""
	I1205 20:11:03.228733  572141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:11:03.228797  572141 start.go:340] cluster config:
	{Name:test-preload-572068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-572068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:11:03.228941  572141 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:11:03.230958  572141 out.go:177] * Starting "test-preload-572068" primary control-plane node in "test-preload-572068" cluster
	I1205 20:11:03.232526  572141 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1205 20:11:03.344398  572141 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:11:03.344436  572141 cache.go:56] Caching tarball of preloaded images
	I1205 20:11:03.344619  572141 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1205 20:11:03.346620  572141 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1205 20:11:03.348177  572141 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 20:11:03.459019  572141 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:11:15.674887  572141 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 20:11:15.674981  572141 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 20:11:16.544318  572141 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
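	[Editor's note] The preload download above is only trusted after it is checked against the md5 digest embedded in the download URL (b2ee0ab83ed99f9e7ff71cb0cf27e8f9). A minimal Go sketch of that kind of streaming MD5 verification, not minikube's actual implementation; the local file path is a placeholder:

```go
// verify_preload.go - illustrative only; the file path is a placeholder,
// the expected digest is the one shown in the download URL above.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// md5OfFile streams the file through an MD5 hasher so a large tarball
// (the preload here is ~459 MB) is never held in memory at once.
func md5OfFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const expected = "b2ee0ab83ed99f9e7ff71cb0cf27e8f9"
	got, err := md5OfFile("preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4")
	if err != nil {
		fmt.Fprintln(os.Stderr, "checksum failed:", err)
		os.Exit(1)
	}
	if got != expected {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s, want %s\n", got, expected)
		os.Exit(1)
	}
	fmt.Println("preload tarball verified")
}
```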
	I1205 20:11:16.544469  572141 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/config.json ...
	I1205 20:11:16.544707  572141 start.go:360] acquireMachinesLock for test-preload-572068: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:11:16.544780  572141 start.go:364] duration metric: took 49.653µs to acquireMachinesLock for "test-preload-572068"
	I1205 20:11:16.544803  572141 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:11:16.544811  572141 fix.go:54] fixHost starting: 
	I1205 20:11:16.545089  572141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:11:16.545133  572141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:11:16.560209  572141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35089
	I1205 20:11:16.560715  572141 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:11:16.561241  572141 main.go:141] libmachine: Using API Version  1
	I1205 20:11:16.561259  572141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:11:16.561576  572141 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:11:16.561845  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	I1205 20:11:16.561978  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetState
	I1205 20:11:16.563673  572141 fix.go:112] recreateIfNeeded on test-preload-572068: state=Stopped err=<nil>
	I1205 20:11:16.563709  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	W1205 20:11:16.563888  572141 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:11:16.566131  572141 out.go:177] * Restarting existing kvm2 VM for "test-preload-572068" ...
	I1205 20:11:16.567485  572141 main.go:141] libmachine: (test-preload-572068) Calling .Start
	I1205 20:11:16.567704  572141 main.go:141] libmachine: (test-preload-572068) Ensuring networks are active...
	I1205 20:11:16.568584  572141 main.go:141] libmachine: (test-preload-572068) Ensuring network default is active
	I1205 20:11:16.568912  572141 main.go:141] libmachine: (test-preload-572068) Ensuring network mk-test-preload-572068 is active
	I1205 20:11:16.569267  572141 main.go:141] libmachine: (test-preload-572068) Getting domain xml...
	I1205 20:11:16.569965  572141 main.go:141] libmachine: (test-preload-572068) Creating domain...
	I1205 20:11:17.782718  572141 main.go:141] libmachine: (test-preload-572068) Waiting to get IP...
	I1205 20:11:17.783590  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:17.784076  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:17.784138  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:17.784053  572223 retry.go:31] will retry after 224.382302ms: waiting for machine to come up
	I1205 20:11:18.010602  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:18.011045  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:18.011079  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:18.011021  572223 retry.go:31] will retry after 310.659077ms: waiting for machine to come up
	I1205 20:11:18.323608  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:18.324098  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:18.324119  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:18.324068  572223 retry.go:31] will retry after 416.932511ms: waiting for machine to come up
	I1205 20:11:18.742551  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:18.743063  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:18.743092  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:18.742997  572223 retry.go:31] will retry after 532.145432ms: waiting for machine to come up
	I1205 20:11:19.276768  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:19.277185  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:19.277218  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:19.277132  572223 retry.go:31] will retry after 642.780831ms: waiting for machine to come up
	I1205 20:11:19.922040  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:19.922490  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:19.922524  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:19.922428  572223 retry.go:31] will retry after 692.770687ms: waiting for machine to come up
	I1205 20:11:20.616326  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:20.616755  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:20.616781  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:20.616722  572223 retry.go:31] will retry after 774.412473ms: waiting for machine to come up
	I1205 20:11:21.392596  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:21.392982  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:21.393013  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:21.392918  572223 retry.go:31] will retry after 1.049617305s: waiting for machine to come up
	I1205 20:11:22.443839  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:22.444358  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:22.444396  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:22.444313  572223 retry.go:31] will retry after 1.210823884s: waiting for machine to come up
	I1205 20:11:23.656575  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:23.657071  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:23.657098  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:23.657016  572223 retry.go:31] will retry after 2.254928766s: waiting for machine to come up
	I1205 20:11:25.913459  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:25.913845  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:25.913869  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:25.913805  572223 retry.go:31] will retry after 2.342837683s: waiting for machine to come up
	I1205 20:11:28.257836  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:28.258285  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:28.258324  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:28.258259  572223 retry.go:31] will retry after 3.026240297s: waiting for machine to come up
	I1205 20:11:31.288526  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:31.288885  572141 main.go:141] libmachine: (test-preload-572068) DBG | unable to find current IP address of domain test-preload-572068 in network mk-test-preload-572068
	I1205 20:11:31.288940  572141 main.go:141] libmachine: (test-preload-572068) DBG | I1205 20:11:31.288830  572223 retry.go:31] will retry after 3.551625065s: waiting for machine to come up
	I1205 20:11:34.844201  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:34.844574  572141 main.go:141] libmachine: (test-preload-572068) Found IP for machine: 192.168.39.29
	I1205 20:11:34.844607  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has current primary IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:34.844620  572141 main.go:141] libmachine: (test-preload-572068) Reserving static IP address...
	I1205 20:11:34.845142  572141 main.go:141] libmachine: (test-preload-572068) Reserved static IP address: 192.168.39.29
	I1205 20:11:34.845178  572141 main.go:141] libmachine: (test-preload-572068) Waiting for SSH to be available...
	I1205 20:11:34.845203  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "test-preload-572068", mac: "52:54:00:78:06:af", ip: "192.168.39.29"} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:34.845230  572141 main.go:141] libmachine: (test-preload-572068) DBG | skip adding static IP to network mk-test-preload-572068 - found existing host DHCP lease matching {name: "test-preload-572068", mac: "52:54:00:78:06:af", ip: "192.168.39.29"}
	I1205 20:11:34.845246  572141 main.go:141] libmachine: (test-preload-572068) DBG | Getting to WaitForSSH function...
	I1205 20:11:34.847221  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:34.847528  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:34.847557  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:34.847671  572141 main.go:141] libmachine: (test-preload-572068) DBG | Using SSH client type: external
	I1205 20:11:34.847697  572141 main.go:141] libmachine: (test-preload-572068) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/test-preload-572068/id_rsa (-rw-------)
	I1205 20:11:34.847765  572141 main.go:141] libmachine: (test-preload-572068) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/test-preload-572068/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:11:34.847794  572141 main.go:141] libmachine: (test-preload-572068) DBG | About to run SSH command:
	I1205 20:11:34.847815  572141 main.go:141] libmachine: (test-preload-572068) DBG | exit 0
	I1205 20:11:34.972582  572141 main.go:141] libmachine: (test-preload-572068) DBG | SSH cmd err, output: <nil>: 
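	[Editor's note] The "will retry after ..." lines above show the provisioner polling for the restarted VM's DHCP lease with a delay that grows on each attempt (224ms, 310ms, 416ms, ...). A rough Go sketch of that poll-with-growing-backoff pattern; lookupIP is a hypothetical stand-in and this is not minikube's retry.go:

```go
// wait_for_ip.go - a sketch of polling with growing, jittered backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder; a real implementation would scan the libvirt
// network's DHCP leases for the domain's MAC address.
func lookupIP(mac string) (string, error) {
	return "", errNoLease
}

// waitForIP retries lookupIP until the deadline expires, sleeping a little
// longer (plus jitter) after every failed attempt.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay each round
	}
	return "", fmt.Errorf("machine did not get an IP within %v", deadline)
}

func main() {
	if ip, err := waitForIP("52:54:00:78:06:af", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
```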
	I1205 20:11:34.973032  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetConfigRaw
	I1205 20:11:34.973700  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetIP
	I1205 20:11:34.976173  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:34.976477  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:34.976508  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:34.976704  572141 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/config.json ...
	I1205 20:11:34.976937  572141 machine.go:93] provisionDockerMachine start ...
	I1205 20:11:34.976981  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	I1205 20:11:34.977200  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:11:34.979384  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:34.979651  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:34.979682  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:34.979852  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHPort
	I1205 20:11:34.980019  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:34.980192  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:34.980316  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHUsername
	I1205 20:11:34.980501  572141 main.go:141] libmachine: Using SSH client type: native
	I1205 20:11:34.980768  572141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I1205 20:11:34.980777  572141 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:11:35.089322  572141 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:11:35.089367  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetMachineName
	I1205 20:11:35.089616  572141 buildroot.go:166] provisioning hostname "test-preload-572068"
	I1205 20:11:35.089648  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetMachineName
	I1205 20:11:35.089866  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:11:35.092513  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.092847  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:35.092888  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.093086  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHPort
	I1205 20:11:35.093291  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:35.093432  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:35.093562  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHUsername
	I1205 20:11:35.093769  572141 main.go:141] libmachine: Using SSH client type: native
	I1205 20:11:35.094013  572141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I1205 20:11:35.094034  572141 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-572068 && echo "test-preload-572068" | sudo tee /etc/hostname
	I1205 20:11:35.215117  572141 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-572068
	
	I1205 20:11:35.215149  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:11:35.217998  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.218280  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:35.218316  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.218445  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHPort
	I1205 20:11:35.218653  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:35.218823  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:35.219005  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHUsername
	I1205 20:11:35.219153  572141 main.go:141] libmachine: Using SSH client type: native
	I1205 20:11:35.219377  572141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I1205 20:11:35.219395  572141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-572068' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-572068/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-572068' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:11:35.337591  572141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:11:35.337623  572141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:11:35.337664  572141 buildroot.go:174] setting up certificates
	I1205 20:11:35.337675  572141 provision.go:84] configureAuth start
	I1205 20:11:35.337685  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetMachineName
	I1205 20:11:35.338058  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetIP
	I1205 20:11:35.341136  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.341445  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:35.341494  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.341617  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:11:35.344259  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.344600  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:35.344631  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.344801  572141 provision.go:143] copyHostCerts
	I1205 20:11:35.344865  572141 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:11:35.344886  572141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:11:35.344956  572141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:11:35.345077  572141 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:11:35.345093  572141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:11:35.345134  572141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:11:35.345216  572141 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:11:35.345226  572141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:11:35.345253  572141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:11:35.345317  572141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.test-preload-572068 san=[127.0.0.1 192.168.39.29 localhost minikube test-preload-572068]
	I1205 20:11:35.496671  572141 provision.go:177] copyRemoteCerts
	I1205 20:11:35.496737  572141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:11:35.496773  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:11:35.499568  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.499982  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:35.500014  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.500212  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHPort
	I1205 20:11:35.500480  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:35.500631  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHUsername
	I1205 20:11:35.500762  572141 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/test-preload-572068/id_rsa Username:docker}
	I1205 20:11:35.583460  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:11:35.609332  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 20:11:35.634744  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:11:35.659720  572141 provision.go:87] duration metric: took 322.030089ms to configureAuth
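	[Editor's note] configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, 192.168.39.29, localhost, minikube and the machine name. A self-signed Go sketch of producing a certificate with those SANs; the real provisioner signs with the minikube CA key rather than self-signing, and omits nothing shown here is taken from minikube's source:

```go
// server_cert.go - illustrative self-signed certificate with the SANs
// listed in the provision.go line above; not minikube's implementation.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-572068"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.29")},
		DNSNames:     []string{"localhost", "minikube", "test-preload-572068"},
	}
	// Self-signed for brevity: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```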
	I1205 20:11:35.659755  572141 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:11:35.659979  572141 config.go:182] Loaded profile config "test-preload-572068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1205 20:11:35.660074  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:11:35.662956  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.663338  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:35.663366  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.663564  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHPort
	I1205 20:11:35.663792  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:35.663971  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:35.664091  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHUsername
	I1205 20:11:35.664250  572141 main.go:141] libmachine: Using SSH client type: native
	I1205 20:11:35.664505  572141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I1205 20:11:35.664523  572141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:11:35.904683  572141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:11:35.904710  572141 machine.go:96] duration metric: took 927.756865ms to provisionDockerMachine
	I1205 20:11:35.904725  572141 start.go:293] postStartSetup for "test-preload-572068" (driver="kvm2")
	I1205 20:11:35.904736  572141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:11:35.904755  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	I1205 20:11:35.905103  572141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:11:35.905162  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:11:35.907669  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.908007  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:35.908035  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:35.908179  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHPort
	I1205 20:11:35.908424  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:35.908594  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHUsername
	I1205 20:11:35.908729  572141 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/test-preload-572068/id_rsa Username:docker}
	I1205 20:11:35.993843  572141 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:11:35.998432  572141 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:11:35.998469  572141 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:11:35.998535  572141 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:11:35.998610  572141 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:11:35.998699  572141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:11:36.010996  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:11:36.038745  572141 start.go:296] duration metric: took 134.004009ms for postStartSetup
	I1205 20:11:36.038791  572141 fix.go:56] duration metric: took 19.49398078s for fixHost
	I1205 20:11:36.038814  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:11:36.041673  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:36.042062  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:36.042087  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:36.042289  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHPort
	I1205 20:11:36.042496  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:36.042688  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:36.042844  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHUsername
	I1205 20:11:36.043042  572141 main.go:141] libmachine: Using SSH client type: native
	I1205 20:11:36.043355  572141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I1205 20:11:36.043373  572141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:11:36.149420  572141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733429496.119941769
	
	I1205 20:11:36.149453  572141 fix.go:216] guest clock: 1733429496.119941769
	I1205 20:11:36.149467  572141 fix.go:229] Guest: 2024-12-05 20:11:36.119941769 +0000 UTC Remote: 2024-12-05 20:11:36.038795619 +0000 UTC m=+32.954351197 (delta=81.14615ms)
	I1205 20:11:36.149522  572141 fix.go:200] guest clock delta is within tolerance: 81.14615ms
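	[Editor's note] The fix.go lines above compare the guest clock against the host and only resynchronise when the delta exceeds a tolerance. A small illustrative Go sketch of that check; the one-second tolerance is an assumption, not the value minikube uses:

```go
// clock_delta.go - sketch of the guest-vs-host clock tolerance check.
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute guest/host clock difference
// is small enough to skip resynchronising the guest clock.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(81 * time.Millisecond) // on the order of the 81.14615ms delta above
	if delta, ok := withinTolerance(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```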
	I1205 20:11:36.149530  572141 start.go:83] releasing machines lock for "test-preload-572068", held for 19.604734474s
	I1205 20:11:36.149562  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	I1205 20:11:36.149932  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetIP
	I1205 20:11:36.152474  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:36.152755  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:36.152790  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:36.152899  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	I1205 20:11:36.153440  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	I1205 20:11:36.153617  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	I1205 20:11:36.153743  572141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:11:36.153781  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:11:36.153880  572141 ssh_runner.go:195] Run: cat /version.json
	I1205 20:11:36.153913  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:11:36.156449  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:36.156819  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:36.156850  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:36.156869  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:36.157070  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHPort
	I1205 20:11:36.157226  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:36.157354  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:36.157382  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHUsername
	I1205 20:11:36.157451  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:36.157490  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHPort
	I1205 20:11:36.157540  572141 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/test-preload-572068/id_rsa Username:docker}
	I1205 20:11:36.157886  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:11:36.158086  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHUsername
	I1205 20:11:36.158225  572141 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/test-preload-572068/id_rsa Username:docker}
	I1205 20:11:36.253713  572141 ssh_runner.go:195] Run: systemctl --version
	I1205 20:11:36.260359  572141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:11:36.410002  572141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:11:36.416436  572141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:11:36.416503  572141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:11:36.433289  572141 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:11:36.433318  572141 start.go:495] detecting cgroup driver to use...
	I1205 20:11:36.433392  572141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:11:36.449900  572141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:11:36.464586  572141 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:11:36.464645  572141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:11:36.478714  572141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:11:36.492840  572141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:11:36.609628  572141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:11:36.776069  572141 docker.go:233] disabling docker service ...
	I1205 20:11:36.776140  572141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:11:36.790864  572141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:11:36.805066  572141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:11:36.924661  572141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:11:37.035316  572141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:11:37.049807  572141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:11:37.069625  572141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1205 20:11:37.069700  572141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:11:37.080426  572141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:11:37.080507  572141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:11:37.091379  572141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:11:37.101986  572141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:11:37.113319  572141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:11:37.124568  572141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:11:37.135144  572141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:11:37.153149  572141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:11:37.163778  572141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:11:37.173542  572141 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:11:37.173612  572141 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:11:37.187318  572141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:11:37.196939  572141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:11:37.323653  572141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:11:37.415613  572141 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:11:37.415704  572141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:11:37.420939  572141 start.go:563] Will wait 60s for crictl version
	I1205 20:11:37.421024  572141 ssh_runner.go:195] Run: which crictl
	I1205 20:11:37.424980  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:11:37.473055  572141 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:11:37.473155  572141 ssh_runner.go:195] Run: crio --version
	I1205 20:11:37.501487  572141 ssh_runner.go:195] Run: crio --version
	I1205 20:11:37.533533  572141 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1205 20:11:37.535172  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetIP
	I1205 20:11:37.538352  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:37.538777  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:11:37.538813  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:11:37.539028  572141 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:11:37.543743  572141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:11:37.557412  572141 kubeadm.go:883] updating cluster {Name:test-preload-572068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-572068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:11:37.557542  572141 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1205 20:11:37.557600  572141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:11:37.600342  572141 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1205 20:11:37.600435  572141 ssh_runner.go:195] Run: which lz4
	I1205 20:11:37.604698  572141 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:11:37.609494  572141 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:11:37.609529  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1205 20:11:39.268357  572141 crio.go:462] duration metric: took 1.663693682s to copy over tarball
	I1205 20:11:39.268445  572141 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:11:41.698970  572141 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.430486442s)
	I1205 20:11:41.699003  572141 crio.go:469] duration metric: took 2.430609373s to extract the tarball
	I1205 20:11:41.699014  572141 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:11:41.741141  572141 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:11:41.785105  572141 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1205 20:11:41.785144  572141 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:11:41.785265  572141 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:11:41.785309  572141 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 20:11:41.785270  572141 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 20:11:41.785338  572141 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 20:11:41.785347  572141 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 20:11:41.785310  572141 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1205 20:11:41.785276  572141 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1205 20:11:41.785273  572141 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 20:11:41.786901  572141 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:11:41.786922  572141 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 20:11:41.786932  572141 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1205 20:11:41.786932  572141 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 20:11:41.786909  572141 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1205 20:11:41.786920  572141 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 20:11:41.786903  572141 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 20:11:41.787007  572141 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 20:11:41.943510  572141 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1205 20:11:41.988971  572141 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1205 20:11:41.989012  572141 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1205 20:11:41.989053  572141 ssh_runner.go:195] Run: which crictl
	I1205 20:11:41.989813  572141 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1205 20:11:41.993408  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1205 20:11:42.031362  572141 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1205 20:11:42.047432  572141 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1205 20:11:42.047491  572141 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 20:11:42.047551  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1205 20:11:42.047555  572141 ssh_runner.go:195] Run: which crictl
	I1205 20:11:42.081452  572141 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1205 20:11:42.081507  572141 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 20:11:42.081562  572141 ssh_runner.go:195] Run: which crictl
	I1205 20:11:42.089596  572141 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1205 20:11:42.093670  572141 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1205 20:11:42.100253  572141 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 20:11:42.101925  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1205 20:11:42.101964  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 20:11:42.101986  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1205 20:11:42.128078  572141 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1205 20:11:42.264955  572141 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1205 20:11:42.264997  572141 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 20:11:42.265049  572141 ssh_runner.go:195] Run: which crictl
	I1205 20:11:42.265047  572141 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1205 20:11:42.265078  572141 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1205 20:11:42.265129  572141 ssh_runner.go:195] Run: which crictl
	I1205 20:11:42.265164  572141 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1205 20:11:42.265198  572141 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 20:11:42.265255  572141 ssh_runner.go:195] Run: which crictl
	I1205 20:11:42.265300  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 20:11:42.265395  572141 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1205 20:11:42.265416  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1205 20:11:42.265489  572141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1205 20:11:42.276405  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1205 20:11:42.302431  572141 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1205 20:11:42.302492  572141 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 20:11:42.302538  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1205 20:11:42.302542  572141 ssh_runner.go:195] Run: which crictl
	I1205 20:11:42.361283  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 20:11:42.361349  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 20:11:42.361383  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1205 20:11:42.361308  572141 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1205 20:11:42.361453  572141 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1205 20:11:42.361479  572141 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1205 20:11:42.378596  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1205 20:11:42.378626  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1205 20:11:42.391766  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1205 20:11:42.539425  572141 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1205 20:11:42.539521  572141 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1205 20:11:42.539555  572141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1205 20:11:42.539609  572141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1205 20:11:42.539632  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 20:11:43.014603  572141 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:11:45.174516  572141 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.813011265s)
	I1205 20:11:45.174560  572141 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1205 20:11:45.174627  572141 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4: (2.795974427s)
	I1205 20:11:45.174690  572141 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.79606637s)
	I1205 20:11:45.174703  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1205 20:11:45.174785  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1205 20:11:45.174792  572141 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (2.782993503s)
	I1205 20:11:45.174844  572141 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.635270441s)
	I1205 20:11:45.174902  572141 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1205 20:11:45.174912  572141 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1205 20:11:45.174917  572141 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.635267814s)
	I1205 20:11:45.174953  572141 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1205 20:11:45.174847  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1205 20:11:45.174968  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 20:11:45.175017  572141 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.635382585s)
	I1205 20:11:45.175050  572141 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.160415953s)
	I1205 20:11:45.175060  572141 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1205 20:11:45.275553  572141 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1205 20:11:45.275629  572141 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1205 20:11:45.275685  572141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1205 20:11:45.721480  572141 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1205 20:11:45.721518  572141 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1205 20:11:45.721562  572141 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1205 20:11:45.721648  572141 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1205 20:11:45.721738  572141 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1205 20:11:45.721669  572141 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1205 20:11:45.721827  572141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1205 20:11:45.721827  572141 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1205 20:11:45.722022  572141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1205 20:11:45.721852  572141 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1205 20:11:46.170745  572141 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1205 20:11:46.170801  572141 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1205 20:11:46.170872  572141 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1205 20:11:46.170891  572141 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1205 20:11:46.170933  572141 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1205 20:11:46.170999  572141 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1205 20:11:46.922511  572141 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1205 20:11:46.922559  572141 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1205 20:11:46.922618  572141 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1205 20:11:49.072931  572141 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.150286011s)
	I1205 20:11:49.072973  572141 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1205 20:11:49.073015  572141 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1205 20:11:49.073069  572141 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1205 20:11:49.921027  572141 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1205 20:11:49.921085  572141 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1205 20:11:49.921132  572141 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1205 20:11:50.666443  572141 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1205 20:11:50.666505  572141 cache_images.go:123] Successfully loaded all cached images
	I1205 20:11:50.666513  572141 cache_images.go:92] duration metric: took 8.881352596s to LoadCachedImages
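
Each image in the LoadCachedImages block above goes through the same sequence: podman image inspect to see whether it is already present, crictl rmi to clear a stale reference, then podman load of the tarball copied from the cache. A condensed, local-only Go sketch of that per-image step (the hash comparison, the scp transfer, and the parallelism of the real flow are omitted):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage is a condensed sketch of the per-image step in the log:
// inspect with podman, clear any stale reference with crictl, then load the
// cached tarball.
func loadCachedImage(image, tarball string) error {
	// "sudo podman image inspect --format {{.Id}}" exits non-zero if the image is missing
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	// remove any stale reference so the load starts clean
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	// "sudo podman load -i /var/lib/minikube/images/<name>"
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v: %s", image, err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
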
	I1205 20:11:50.666530  572141 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.24.4 crio true true} ...
	I1205 20:11:50.666660  572141 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-572068 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-572068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:11:50.666755  572141 ssh_runner.go:195] Run: crio config
	I1205 20:11:50.718251  572141 cni.go:84] Creating CNI manager for ""
	I1205 20:11:50.718274  572141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:11:50.718284  572141 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:11:50.718312  572141 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-572068 NodeName:test-preload-572068 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:11:50.718469  572141 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-572068"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
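
The kubeadm config above is generated from a template filled in with the node's address, port, and Kubernetes version. A heavily trimmed, hypothetical rendering sketch using Go's text/template (the template body and field names here are illustrative, not minikube's actual template; the values are the ones from this run):

package main

import (
	"os"
	"text/template"
)

// Hypothetical, trimmed template for the ClusterConfiguration section of the
// config printed above; illustrative only.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("cfg").Parse(clusterCfg))
	_ = t.Execute(os.Stdout, map[string]string{
		"Port":              "8443",
		"KubernetesVersion": "v1.24.4",
		"PodSubnet":         "10.244.0.0/16",
		"ServiceSubnet":     "10.96.0.0/12",
	})
}
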
	
	I1205 20:11:50.718535  572141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1205 20:11:50.729166  572141 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:11:50.729248  572141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:11:50.739111  572141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1205 20:11:50.756603  572141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:11:50.773977  572141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1205 20:11:50.791831  572141 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I1205 20:11:50.796022  572141 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:11:50.808729  572141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:11:50.946242  572141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:11:50.964409  572141 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068 for IP: 192.168.39.29
	I1205 20:11:50.964431  572141 certs.go:194] generating shared ca certs ...
	I1205 20:11:50.964449  572141 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:11:50.964591  572141 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:11:50.964645  572141 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:11:50.964655  572141 certs.go:256] generating profile certs ...
	I1205 20:11:50.964753  572141 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/client.key
	I1205 20:11:50.964832  572141 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/apiserver.key.eafcf994
	I1205 20:11:50.964874  572141 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/proxy-client.key
	I1205 20:11:50.965007  572141 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:11:50.965044  572141 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:11:50.965062  572141 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:11:50.965100  572141 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:11:50.965132  572141 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:11:50.965165  572141 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:11:50.965210  572141 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:11:50.966081  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:11:51.009729  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:11:51.046563  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:11:51.080147  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:11:51.117444  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 20:11:51.150891  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:11:51.187330  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:11:51.212347  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:11:51.238432  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:11:51.263609  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:11:51.289195  572141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:11:51.314625  572141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:11:51.333117  572141 ssh_runner.go:195] Run: openssl version
	I1205 20:11:51.339228  572141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:11:51.350289  572141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:11:51.354827  572141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:11:51.354903  572141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:11:51.360631  572141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:11:51.371343  572141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:11:51.382040  572141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:11:51.386618  572141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:11:51.386657  572141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:11:51.392376  572141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:11:51.403417  572141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:11:51.414187  572141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:11:51.418741  572141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:11:51.418787  572141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:11:51.424567  572141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:11:51.435647  572141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:11:51.440433  572141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:11:51.446339  572141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:11:51.452324  572141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:11:51.458283  572141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:11:51.463962  572141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:11:51.469600  572141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
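
The openssl x509 -checkend 86400 runs above verify that none of the control-plane certificates expire within the next 24 hours. The same check can be expressed directly against a PEM file with Go's crypto/x509, as in this sketch (the path is one of the ones from the log; error handling is kept minimal):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d,
// mirroring "openssl x509 -noout -in <path> -checkend 86400" from the log.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
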
	I1205 20:11:51.475297  572141 kubeadm.go:392] StartCluster: {Name:test-preload-572068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-572068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:11:51.475379  572141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:11:51.475416  572141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:11:51.512178  572141 cri.go:89] found id: ""
	I1205 20:11:51.512257  572141 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:11:51.522640  572141 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:11:51.522660  572141 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:11:51.522703  572141 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:11:51.532594  572141 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:11:51.533038  572141 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-572068" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:11:51.533162  572141 kubeconfig.go:62] /home/jenkins/minikube-integration/20052-530897/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-572068" cluster setting kubeconfig missing "test-preload-572068" context setting]
	I1205 20:11:51.533485  572141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:11:51.534133  572141 kapi.go:59] client config for test-preload-572068: &rest.Config{Host:"https://192.168.39.29:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:11:51.534807  572141 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:11:51.544667  572141 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.29
	I1205 20:11:51.544698  572141 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:11:51.544710  572141 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:11:51.544757  572141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:11:51.586617  572141 cri.go:89] found id: ""
	I1205 20:11:51.586699  572141 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:11:51.603298  572141 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:11:51.613468  572141 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:11:51.613498  572141 kubeadm.go:157] found existing configuration files:
	
	I1205 20:11:51.613563  572141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:11:51.622982  572141 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:11:51.623061  572141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:11:51.632935  572141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:11:51.642593  572141 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:11:51.642652  572141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:11:51.652611  572141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:11:51.661942  572141 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:11:51.662007  572141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:11:51.671963  572141 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:11:51.681548  572141 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:11:51.681611  572141 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:11:51.691642  572141 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:11:51.701967  572141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:11:51.813094  572141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:11:52.503925  572141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:11:52.759434  572141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:11:52.817533  572141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
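
Because existing configuration files were found, the restart path re-runs individual kubeadm init phases rather than a full kubeadm init. A bare sketch of that sequence (same phases and --config path as the log; the real invocation runs over SSH with PATH pointing at /var/lib/minikube/binaries/v1.24.4, whereas plain "kubeadm" on PATH is assumed here):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Sketch of the phased restart shown above: the same kubeadm init phases, in
// the same order, against the generated /var/tmp/minikube/kubeadm.yaml.
func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm init phase %v failed: %v\n", p, err)
			return
		}
	}
}
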
	I1205 20:11:52.884112  572141 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:11:52.884197  572141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:11:53.384874  572141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:11:53.884417  572141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:11:54.384476  572141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:11:54.402989  572141 api_server.go:72] duration metric: took 1.518871436s to wait for apiserver process to appear ...
	I1205 20:11:54.403035  572141 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:11:54.403076  572141 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I1205 20:11:54.403622  572141 api_server.go:269] stopped: https://192.168.39.29:8443/healthz: Get "https://192.168.39.29:8443/healthz": dial tcp 192.168.39.29:8443: connect: connection refused
	I1205 20:11:54.903449  572141 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I1205 20:11:58.377931  572141 api_server.go:279] https://192.168.39.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:11:58.377987  572141 api_server.go:103] status: https://192.168.39.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:11:58.378007  572141 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I1205 20:11:58.404781  572141 api_server.go:279] https://192.168.39.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:11:58.404832  572141 api_server.go:103] status: https://192.168.39.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:11:58.404851  572141 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I1205 20:11:58.457237  572141 api_server.go:279] https://192.168.39.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:11:58.457276  572141 api_server.go:103] status: https://192.168.39.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:11:58.903877  572141 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I1205 20:11:58.918391  572141 api_server.go:279] https://192.168.39.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:11:58.918434  572141 api_server.go:103] status: https://192.168.39.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:11:59.404011  572141 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I1205 20:11:59.409339  572141 api_server.go:279] https://192.168.39.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:11:59.409370  572141 api_server.go:103] status: https://192.168.39.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:11:59.904043  572141 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I1205 20:11:59.910267  572141 api_server.go:279] https://192.168.39.29:8443/healthz returned 200:
	ok
	I1205 20:11:59.917012  572141 api_server.go:141] control plane version: v1.24.4
	I1205 20:11:59.917045  572141 api_server.go:131] duration metric: took 5.514001516s to wait for apiserver health ...
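
The healthz wait above is a plain poll loop that rides out the 403 responses (RBAC not yet bootstrapped) and the 500s (post-start hooks still failing) until the endpoint returns 200/ok. A rough equivalent in Go, assuming the endpoint from the log and skipping the certificate verification that the real client config handles properly:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// mirroring the loop in the log. InsecureSkipVerify stands in for the CA
// handling the real client does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.29:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
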
	I1205 20:11:59.917054  572141 cni.go:84] Creating CNI manager for ""
	I1205 20:11:59.917060  572141 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:11:59.919218  572141 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:11:59.921001  572141 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:11:59.935802  572141 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:11:59.963582  572141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:11:59.963692  572141 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 20:11:59.963714  572141 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 20:11:59.979535  572141 system_pods.go:59] 7 kube-system pods found
	I1205 20:11:59.979571  572141 system_pods.go:61] "coredns-6d4b75cb6d-dr6cd" [0ca468a2-0ded-4c92-9a89-fe47b4401e47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:11:59.979577  572141 system_pods.go:61] "etcd-test-preload-572068" [4c1dab2a-68be-496a-9d3e-2cf90cf76ced] Running
	I1205 20:11:59.979583  572141 system_pods.go:61] "kube-apiserver-test-preload-572068" [d48546a2-145c-4a15-84f6-d069a114f4c8] Running
	I1205 20:11:59.979587  572141 system_pods.go:61] "kube-controller-manager-test-preload-572068" [309c1e08-91c6-4c56-a22e-0e8905afb8f1] Running
	I1205 20:11:59.979592  572141 system_pods.go:61] "kube-proxy-tz9v5" [0a901a6f-740f-4706-867d-2876ede3881f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:11:59.979595  572141 system_pods.go:61] "kube-scheduler-test-preload-572068" [28e088e2-dffc-47fa-ad66-6436a040f3b6] Running
	I1205 20:11:59.979601  572141 system_pods.go:61] "storage-provisioner" [7c2715f4-c680-44f6-b863-fc18ed405e93] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:11:59.979614  572141 system_pods.go:74] duration metric: took 16.002804ms to wait for pod list to return data ...
	I1205 20:11:59.979621  572141 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:11:59.985194  572141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:11:59.985226  572141 node_conditions.go:123] node cpu capacity is 2
	I1205 20:11:59.985237  572141 node_conditions.go:105] duration metric: took 5.61176ms to run NodePressure ...
	I1205 20:11:59.985265  572141 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:12:00.256995  572141 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:12:00.264096  572141 retry.go:31] will retry after 199.335573ms: kubelet not initialised
	I1205 20:12:00.469388  572141 retry.go:31] will retry after 452.057385ms: kubelet not initialised
	I1205 20:12:00.927006  572141 retry.go:31] will retry after 672.607692ms: kubelet not initialised
	I1205 20:12:01.605897  572141 retry.go:31] will retry after 909.116111ms: kubelet not initialised
	I1205 20:12:02.521429  572141 retry.go:31] will retry after 1.837205292s: kubelet not initialised
	I1205 20:12:04.365663  572141 kubeadm.go:739] kubelet initialised
	I1205 20:12:04.365698  572141 kubeadm.go:740] duration metric: took 4.108674944s waiting for restarted kubelet to initialise ...
	I1205 20:12:04.365710  572141 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:12:04.372680  572141 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-dr6cd" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:04.381483  572141 pod_ready.go:98] node "test-preload-572068" hosting pod "coredns-6d4b75cb6d-dr6cd" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:04.381523  572141 pod_ready.go:82] duration metric: took 8.811224ms for pod "coredns-6d4b75cb6d-dr6cd" in "kube-system" namespace to be "Ready" ...
	E1205 20:12:04.381536  572141 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-572068" hosting pod "coredns-6d4b75cb6d-dr6cd" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:04.381548  572141 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:04.388394  572141 pod_ready.go:98] node "test-preload-572068" hosting pod "etcd-test-preload-572068" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:04.388420  572141 pod_ready.go:82] duration metric: took 6.863841ms for pod "etcd-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	E1205 20:12:04.388430  572141 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-572068" hosting pod "etcd-test-preload-572068" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:04.388439  572141 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:04.393643  572141 pod_ready.go:98] node "test-preload-572068" hosting pod "kube-apiserver-test-preload-572068" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:04.393675  572141 pod_ready.go:82] duration metric: took 5.224137ms for pod "kube-apiserver-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	E1205 20:12:04.393688  572141 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-572068" hosting pod "kube-apiserver-test-preload-572068" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:04.393696  572141 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:04.399925  572141 pod_ready.go:98] node "test-preload-572068" hosting pod "kube-controller-manager-test-preload-572068" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:04.399958  572141 pod_ready.go:82] duration metric: took 6.25005ms for pod "kube-controller-manager-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	E1205 20:12:04.399969  572141 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-572068" hosting pod "kube-controller-manager-test-preload-572068" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:04.399977  572141 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tz9v5" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:04.764138  572141 pod_ready.go:98] node "test-preload-572068" hosting pod "kube-proxy-tz9v5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:04.764163  572141 pod_ready.go:82] duration metric: took 364.174711ms for pod "kube-proxy-tz9v5" in "kube-system" namespace to be "Ready" ...
	E1205 20:12:04.764173  572141 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-572068" hosting pod "kube-proxy-tz9v5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:04.764179  572141 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:05.164088  572141 pod_ready.go:98] node "test-preload-572068" hosting pod "kube-scheduler-test-preload-572068" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:05.164138  572141 pod_ready.go:82] duration metric: took 399.932135ms for pod "kube-scheduler-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	E1205 20:12:05.164150  572141 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-572068" hosting pod "kube-scheduler-test-preload-572068" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:05.164161  572141 pod_ready.go:39] duration metric: took 798.436075ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
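
The waits above poll each system-critical pod under the listed label selectors for the Ready condition. A self-contained client-go sketch of the same kind of check, assuming a reachable kubeconfig at the default path (illustrative only, not minikube's pod_ready.go):

// Poll kube-system pods matching the selectors from the log until every
// matching pod reports the Ready condition, or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same label selectors the log waits on, checked one at a time.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			for _, sel := range selectors {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					return false, nil // treat API hiccups as "not yet", keep polling
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
			}
			return true, nil
		})
	fmt.Println("all system-critical pods ready:", err == nil)
}
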
	I1205 20:12:05.164186  572141 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:12:05.177718  572141 ops.go:34] apiserver oom_adj: -16
	I1205 20:12:05.177743  572141 kubeadm.go:597] duration metric: took 13.655077377s to restartPrimaryControlPlane
	I1205 20:12:05.177753  572141 kubeadm.go:394] duration metric: took 13.702462256s to StartCluster
	I1205 20:12:05.177770  572141 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:12:05.177841  572141 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:12:05.178542  572141 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:12:05.178769  572141 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:12:05.178836  572141 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:12:05.178935  572141 addons.go:69] Setting storage-provisioner=true in profile "test-preload-572068"
	I1205 20:12:05.178950  572141 addons.go:234] Setting addon storage-provisioner=true in "test-preload-572068"
	W1205 20:12:05.178958  572141 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:12:05.178975  572141 addons.go:69] Setting default-storageclass=true in profile "test-preload-572068"
	I1205 20:12:05.179023  572141 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-572068"
	I1205 20:12:05.179024  572141 config.go:182] Loaded profile config "test-preload-572068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1205 20:12:05.178991  572141 host.go:66] Checking if "test-preload-572068" exists ...
	I1205 20:12:05.179392  572141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:12:05.179433  572141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:12:05.179476  572141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:12:05.179521  572141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:12:05.181325  572141 out.go:177] * Verifying Kubernetes components...
	I1205 20:12:05.182629  572141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:12:05.194915  572141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I1205 20:12:05.195476  572141 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:12:05.196089  572141 main.go:141] libmachine: Using API Version  1
	I1205 20:12:05.196115  572141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:12:05.196497  572141 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:12:05.196698  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetState
	I1205 20:12:05.199078  572141 kapi.go:59] client config for test-preload-572068: &rest.Config{Host:"https://192.168.39.29:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/test-preload-572068/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:12:05.199318  572141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I1205 20:12:05.199415  572141 addons.go:234] Setting addon default-storageclass=true in "test-preload-572068"
	W1205 20:12:05.199437  572141 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:12:05.199479  572141 host.go:66] Checking if "test-preload-572068" exists ...
	I1205 20:12:05.199780  572141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:12:05.199821  572141 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:12:05.199842  572141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:12:05.200391  572141 main.go:141] libmachine: Using API Version  1
	I1205 20:12:05.200418  572141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:12:05.200767  572141 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:12:05.201240  572141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:12:05.201280  572141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:12:05.215316  572141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I1205 20:12:05.215748  572141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33971
	I1205 20:12:05.215809  572141 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:12:05.216194  572141 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:12:05.216469  572141 main.go:141] libmachine: Using API Version  1
	I1205 20:12:05.216493  572141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:12:05.216728  572141 main.go:141] libmachine: Using API Version  1
	I1205 20:12:05.216755  572141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:12:05.216790  572141 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:12:05.217100  572141 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:12:05.217289  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetState
	I1205 20:12:05.217326  572141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:12:05.217372  572141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:12:05.218898  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	I1205 20:12:05.221113  572141 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:12:05.222600  572141 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:12:05.222626  572141 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:12:05.222642  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:12:05.225050  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:12:05.225462  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:12:05.225490  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:12:05.225662  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHPort
	I1205 20:12:05.225784  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:12:05.225870  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHUsername
	I1205 20:12:05.226010  572141 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/test-preload-572068/id_rsa Username:docker}
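
The sshutil line above opens a key-authenticated SSH client to the guest so later commands can be run remotely. A rough sketch of an equivalent client using golang.org/x/crypto/ssh, with the key path and address taken from the log (illustrative only, not minikube's sshutil; the command run at the end is just an example):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address come from the sshutil log line above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20052-530897/.minikube/machines/test-preload-572068/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.29:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s(err=%v)\n", out, err)
}
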
	I1205 20:12:05.255818  572141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37419
	I1205 20:12:05.256431  572141 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:12:05.256996  572141 main.go:141] libmachine: Using API Version  1
	I1205 20:12:05.257022  572141 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:12:05.257369  572141 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:12:05.257531  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetState
	I1205 20:12:05.258963  572141 main.go:141] libmachine: (test-preload-572068) Calling .DriverName
	I1205 20:12:05.259199  572141 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:12:05.259217  572141 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:12:05.259233  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHHostname
	I1205 20:12:05.261846  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:12:05.262281  572141 main.go:141] libmachine: (test-preload-572068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:06:af", ip: ""} in network mk-test-preload-572068: {Iface:virbr1 ExpiryTime:2024-12-05 21:11:28 +0000 UTC Type:0 Mac:52:54:00:78:06:af Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:test-preload-572068 Clientid:01:52:54:00:78:06:af}
	I1205 20:12:05.262313  572141 main.go:141] libmachine: (test-preload-572068) DBG | domain test-preload-572068 has defined IP address 192.168.39.29 and MAC address 52:54:00:78:06:af in network mk-test-preload-572068
	I1205 20:12:05.262460  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHPort
	I1205 20:12:05.262623  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHKeyPath
	I1205 20:12:05.262768  572141 main.go:141] libmachine: (test-preload-572068) Calling .GetSSHUsername
	I1205 20:12:05.262897  572141 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/test-preload-572068/id_rsa Username:docker}
	I1205 20:12:05.392129  572141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:12:05.410744  572141 node_ready.go:35] waiting up to 6m0s for node "test-preload-572068" to be "Ready" ...
	I1205 20:12:05.472768  572141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:12:05.595592  572141 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:12:06.505983  572141 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.033168276s)
	I1205 20:12:06.506056  572141 main.go:141] libmachine: Making call to close driver server
	I1205 20:12:06.506072  572141 main.go:141] libmachine: (test-preload-572068) Calling .Close
	I1205 20:12:06.506423  572141 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:12:06.506449  572141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:12:06.506459  572141 main.go:141] libmachine: Making call to close driver server
	I1205 20:12:06.506467  572141 main.go:141] libmachine: (test-preload-572068) Calling .Close
	I1205 20:12:06.506721  572141 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:12:06.506740  572141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:12:06.519016  572141 main.go:141] libmachine: Making call to close driver server
	I1205 20:12:06.519046  572141 main.go:141] libmachine: (test-preload-572068) Calling .Close
	I1205 20:12:06.519359  572141 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:12:06.519386  572141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:12:06.543156  572141 main.go:141] libmachine: Making call to close driver server
	I1205 20:12:06.543188  572141 main.go:141] libmachine: (test-preload-572068) Calling .Close
	I1205 20:12:06.543509  572141 main.go:141] libmachine: (test-preload-572068) DBG | Closing plugin on server side
	I1205 20:12:06.543536  572141 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:12:06.543551  572141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:12:06.543557  572141 main.go:141] libmachine: Making call to close driver server
	I1205 20:12:06.543565  572141 main.go:141] libmachine: (test-preload-572068) Calling .Close
	I1205 20:12:06.543773  572141 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:12:06.543789  572141 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:12:06.546482  572141 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1205 20:12:06.547932  572141 addons.go:510] duration metric: took 1.369101998s for enable addons: enabled=[default-storageclass storage-provisioner]
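
The two addon manifests above are applied with the in-VM kubectl binary and an explicit KUBECONFIG. An illustrative sketch of that apply step (paths copied from the log, so it only works where those files exist; this is not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "apply", "-f",
		"/etc/kubernetes/addons/storage-provisioner.yaml")
	// Point kubectl at the in-VM admin kubeconfig instead of the caller's default.
	cmd.Env = append(cmd.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
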
	I1205 20:12:07.414890  572141 node_ready.go:53] node "test-preload-572068" has status "Ready":"False"
	I1205 20:12:08.919557  572141 node_ready.go:49] node "test-preload-572068" has status "Ready":"True"
	I1205 20:12:08.919586  572141 node_ready.go:38] duration metric: took 3.508803475s for node "test-preload-572068" to be "Ready" ...
	I1205 20:12:08.919598  572141 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:12:08.925232  572141 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-dr6cd" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:08.931305  572141 pod_ready.go:93] pod "coredns-6d4b75cb6d-dr6cd" in "kube-system" namespace has status "Ready":"True"
	I1205 20:12:08.931339  572141 pod_ready.go:82] duration metric: took 6.073551ms for pod "coredns-6d4b75cb6d-dr6cd" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:08.931353  572141 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:08.936797  572141 pod_ready.go:93] pod "etcd-test-preload-572068" in "kube-system" namespace has status "Ready":"True"
	I1205 20:12:08.936819  572141 pod_ready.go:82] duration metric: took 5.457788ms for pod "etcd-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:08.936827  572141 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:10.444176  572141 pod_ready.go:93] pod "kube-apiserver-test-preload-572068" in "kube-system" namespace has status "Ready":"True"
	I1205 20:12:10.444210  572141 pod_ready.go:82] duration metric: took 1.507374988s for pod "kube-apiserver-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:10.444224  572141 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:11.450912  572141 pod_ready.go:93] pod "kube-controller-manager-test-preload-572068" in "kube-system" namespace has status "Ready":"True"
	I1205 20:12:11.450940  572141 pod_ready.go:82] duration metric: took 1.006708551s for pod "kube-controller-manager-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:11.450967  572141 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tz9v5" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:11.455547  572141 pod_ready.go:93] pod "kube-proxy-tz9v5" in "kube-system" namespace has status "Ready":"True"
	I1205 20:12:11.455575  572141 pod_ready.go:82] duration metric: took 4.598607ms for pod "kube-proxy-tz9v5" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:11.455586  572141 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:11.715152  572141 pod_ready.go:93] pod "kube-scheduler-test-preload-572068" in "kube-system" namespace has status "Ready":"True"
	I1205 20:12:11.715177  572141 pod_ready.go:82] duration metric: took 259.582284ms for pod "kube-scheduler-test-preload-572068" in "kube-system" namespace to be "Ready" ...
	I1205 20:12:11.715189  572141 pod_ready.go:39] duration metric: took 2.795580503s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:12:11.715211  572141 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:12:11.715275  572141 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:12:11.731183  572141 api_server.go:72] duration metric: took 6.552380016s to wait for apiserver process to appear ...
	I1205 20:12:11.731226  572141 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:12:11.731254  572141 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I1205 20:12:11.737145  572141 api_server.go:279] https://192.168.39.29:8443/healthz returned 200:
	ok
	I1205 20:12:11.738062  572141 api_server.go:141] control plane version: v1.24.4
	I1205 20:12:11.738089  572141 api_server.go:131] duration metric: took 6.854199ms to wait for apiserver health ...
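
The health check above is a plain GET on /healthz of the secure port (expecting the literal body "ok"), followed by a version query that the log reports as the control plane version. A minimal client-go sketch of the same probe, assuming a kubeconfig at the default path (not minikube's api_server.go):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// /healthz returns the literal body "ok" when the apiserver is healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	fmt.Printf("healthz: %s (err=%v)\n", body, err)

	// ServerVersion is what the log prints as the control plane version.
	if v, err := cs.Discovery().ServerVersion(); err == nil {
		fmt.Println("control plane version:", v.GitVersion)
	}
}
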
	I1205 20:12:11.738097  572141 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:12:11.916604  572141 system_pods.go:59] 7 kube-system pods found
	I1205 20:12:11.916633  572141 system_pods.go:61] "coredns-6d4b75cb6d-dr6cd" [0ca468a2-0ded-4c92-9a89-fe47b4401e47] Running
	I1205 20:12:11.916638  572141 system_pods.go:61] "etcd-test-preload-572068" [4c1dab2a-68be-496a-9d3e-2cf90cf76ced] Running
	I1205 20:12:11.916642  572141 system_pods.go:61] "kube-apiserver-test-preload-572068" [d48546a2-145c-4a15-84f6-d069a114f4c8] Running
	I1205 20:12:11.916645  572141 system_pods.go:61] "kube-controller-manager-test-preload-572068" [309c1e08-91c6-4c56-a22e-0e8905afb8f1] Running
	I1205 20:12:11.916652  572141 system_pods.go:61] "kube-proxy-tz9v5" [0a901a6f-740f-4706-867d-2876ede3881f] Running
	I1205 20:12:11.916656  572141 system_pods.go:61] "kube-scheduler-test-preload-572068" [28e088e2-dffc-47fa-ad66-6436a040f3b6] Running
	I1205 20:12:11.916659  572141 system_pods.go:61] "storage-provisioner" [7c2715f4-c680-44f6-b863-fc18ed405e93] Running
	I1205 20:12:11.916665  572141 system_pods.go:74] duration metric: took 178.562168ms to wait for pod list to return data ...
	I1205 20:12:11.916672  572141 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:12:12.115032  572141 default_sa.go:45] found service account: "default"
	I1205 20:12:12.115060  572141 default_sa.go:55] duration metric: took 198.38224ms for default service account to be created ...
	I1205 20:12:12.115070  572141 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:12:12.317322  572141 system_pods.go:86] 7 kube-system pods found
	I1205 20:12:12.317357  572141 system_pods.go:89] "coredns-6d4b75cb6d-dr6cd" [0ca468a2-0ded-4c92-9a89-fe47b4401e47] Running
	I1205 20:12:12.317363  572141 system_pods.go:89] "etcd-test-preload-572068" [4c1dab2a-68be-496a-9d3e-2cf90cf76ced] Running
	I1205 20:12:12.317367  572141 system_pods.go:89] "kube-apiserver-test-preload-572068" [d48546a2-145c-4a15-84f6-d069a114f4c8] Running
	I1205 20:12:12.317371  572141 system_pods.go:89] "kube-controller-manager-test-preload-572068" [309c1e08-91c6-4c56-a22e-0e8905afb8f1] Running
	I1205 20:12:12.317376  572141 system_pods.go:89] "kube-proxy-tz9v5" [0a901a6f-740f-4706-867d-2876ede3881f] Running
	I1205 20:12:12.317379  572141 system_pods.go:89] "kube-scheduler-test-preload-572068" [28e088e2-dffc-47fa-ad66-6436a040f3b6] Running
	I1205 20:12:12.317382  572141 system_pods.go:89] "storage-provisioner" [7c2715f4-c680-44f6-b863-fc18ed405e93] Running
	I1205 20:12:12.317390  572141 system_pods.go:126] duration metric: took 202.313133ms to wait for k8s-apps to be running ...
	I1205 20:12:12.317396  572141 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:12:12.317439  572141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:12:12.333449  572141 system_svc.go:56] duration metric: took 16.035131ms WaitForService to wait for kubelet
	I1205 20:12:12.333497  572141 kubeadm.go:582] duration metric: took 7.154692323s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:12:12.333519  572141 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:12:12.515747  572141 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:12:12.515774  572141 node_conditions.go:123] node cpu capacity is 2
	I1205 20:12:12.515784  572141 node_conditions.go:105] duration metric: took 182.258803ms to run NodePressure ...
	I1205 20:12:12.515796  572141 start.go:241] waiting for startup goroutines ...
	I1205 20:12:12.515805  572141 start.go:246] waiting for cluster config update ...
	I1205 20:12:12.515818  572141 start.go:255] writing updated cluster config ...
	I1205 20:12:12.516146  572141 ssh_runner.go:195] Run: rm -f paused
	I1205 20:12:12.566958  572141 start.go:600] kubectl: 1.31.3, cluster: 1.24.4 (minor skew: 7)
	I1205 20:12:12.569253  572141 out.go:201] 
	W1205 20:12:12.570878  572141 out.go:270] ! /usr/local/bin/kubectl is version 1.31.3, which may have incompatibilities with Kubernetes 1.24.4.
	I1205 20:12:12.572328  572141 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1205 20:12:12.574243  572141 out.go:177] * Done! kubectl is now configured to use "test-preload-572068" cluster and "default" namespace by default
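
The warning a few lines above compares kubectl's minor version (1.31) with the cluster's (1.24) and reports a skew of 7; per the log, the real check lives in minikube's start.go. A rough, illustrative sketch of that comparison:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a version string such as "v1.31.3".
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.31.3", "1.24.4" // versions reported in the log
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // prints 7, matching the log
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with the cluster; try 'minikube kubectl'")
	}
}
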
	
	
	==> CRI-O <==
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.510081909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429533510058604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6648536d-b54c-46bf-9a0d-a64e62bb5728 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.510852792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=680a09f3-c0e9-4d20-9acc-24d56d1b38c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.510927149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=680a09f3-c0e9-4d20-9acc-24d56d1b38c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.511083311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a2f2d1e285cafe6d569508231565aa0fe661f6a87082237233ff67053b59df,PodSandboxId:e747aba3565ca78f9613dccd5cb3c6fc2794156b202ed9176efb1cd95b86bcfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733429527149343993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dr6cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca468a2-0ded-4c92-9a89-fe47b4401e47,},Annotations:map[string]string{io.kubernetes.container.hash: 9fd5629e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf59c1966ced2b90fd60c1f6b8702ea03a6c43c7657e4cdbb6e9e6c335ad081,PodSandboxId:4a1397ffcf6ebe498bb780675d6fb55bd4ad79b200600efa3cfd753ad077baf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733429519889770218,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tz9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0a901a6f-740f-4706-867d-2876ede3881f,},Annotations:map[string]string{io.kubernetes.container.hash: ac18df6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02fea551706b2cb8cd7eb683dcf9d3eef1e969cc7ec134bf86669778c0c3e9c1,PodSandboxId:6e2bf4c06f74cfd0627a6944580c0f30b7cf9057826529e4c119a270c87d6d24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733429519629491417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c
2715f4-c680-44f6-b863-fc18ed405e93,},Annotations:map[string]string{io.kubernetes.container.hash: 2568951f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca67577740e7a588da09982f5b778cd7e1d4254923897d60f2bf0f83e6ff5a,PodSandboxId:c6a70e43cd6c2a0953d6a65c75c08f0e1ef7586ba7ab9b04cdf80fe1921d8980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733429513713210738,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778001d05
2720161dcad79d849eb81a5,},Annotations:map[string]string{io.kubernetes.container.hash: 84ec2f77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e2f5b221c025bc7b55c55abb19bd0a1cc8f8062c8bc6a28751e7537420684c,PodSandboxId:542678afc796c79c9ca4b7aadc9fd0e2d40c3c1ba489a1b83d58256e5feee20d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733429513690012677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 05a292e288ac1920b03a2dc2ff804ba7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5639d780a0f56618f4fe458162df79be3240001360d4e48422b6c7c61f448b,PodSandboxId:3246b125edfaad6190cf9ec12d024efc48a446a5e7bd06cf10571bac7d32abd3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733429513638020895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 159f499397686b01114763f074136aa8,}
,Annotations:map[string]string{io.kubernetes.container.hash: da1f1eeb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd320380555343aa8f53ff4e1b814ab87f89855e27b6282a63b3ac701fcb441,PodSandboxId:68a467c8169406cecc11326233f077681e407d2cc8fe51f146832cafe27602e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733429513569381588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4499820d942cc21cde92e8ba97140362,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=680a09f3-c0e9-4d20-9acc-24d56d1b38c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.548876590Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00d85b92-1c50-4467-9670-fec546b3a483 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.548955998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00d85b92-1c50-4467-9670-fec546b3a483 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.549884631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba3721a8-7a6e-4493-8d66-41b87b170148 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.550314835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429533550292291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba3721a8-7a6e-4493-8d66-41b87b170148 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.550773607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4fa0a41-ddb5-423e-954b-4dc87c8a8d63 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.550823605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4fa0a41-ddb5-423e-954b-4dc87c8a8d63 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.550995533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a2f2d1e285cafe6d569508231565aa0fe661f6a87082237233ff67053b59df,PodSandboxId:e747aba3565ca78f9613dccd5cb3c6fc2794156b202ed9176efb1cd95b86bcfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733429527149343993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dr6cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca468a2-0ded-4c92-9a89-fe47b4401e47,},Annotations:map[string]string{io.kubernetes.container.hash: 9fd5629e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf59c1966ced2b90fd60c1f6b8702ea03a6c43c7657e4cdbb6e9e6c335ad081,PodSandboxId:4a1397ffcf6ebe498bb780675d6fb55bd4ad79b200600efa3cfd753ad077baf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733429519889770218,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tz9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0a901a6f-740f-4706-867d-2876ede3881f,},Annotations:map[string]string{io.kubernetes.container.hash: ac18df6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02fea551706b2cb8cd7eb683dcf9d3eef1e969cc7ec134bf86669778c0c3e9c1,PodSandboxId:6e2bf4c06f74cfd0627a6944580c0f30b7cf9057826529e4c119a270c87d6d24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733429519629491417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c
2715f4-c680-44f6-b863-fc18ed405e93,},Annotations:map[string]string{io.kubernetes.container.hash: 2568951f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca67577740e7a588da09982f5b778cd7e1d4254923897d60f2bf0f83e6ff5a,PodSandboxId:c6a70e43cd6c2a0953d6a65c75c08f0e1ef7586ba7ab9b04cdf80fe1921d8980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733429513713210738,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778001d05
2720161dcad79d849eb81a5,},Annotations:map[string]string{io.kubernetes.container.hash: 84ec2f77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e2f5b221c025bc7b55c55abb19bd0a1cc8f8062c8bc6a28751e7537420684c,PodSandboxId:542678afc796c79c9ca4b7aadc9fd0e2d40c3c1ba489a1b83d58256e5feee20d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733429513690012677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 05a292e288ac1920b03a2dc2ff804ba7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5639d780a0f56618f4fe458162df79be3240001360d4e48422b6c7c61f448b,PodSandboxId:3246b125edfaad6190cf9ec12d024efc48a446a5e7bd06cf10571bac7d32abd3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733429513638020895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 159f499397686b01114763f074136aa8,}
,Annotations:map[string]string{io.kubernetes.container.hash: da1f1eeb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd320380555343aa8f53ff4e1b814ab87f89855e27b6282a63b3ac701fcb441,PodSandboxId:68a467c8169406cecc11326233f077681e407d2cc8fe51f146832cafe27602e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733429513569381588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4499820d942cc21cde92e8ba97140362,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4fa0a41-ddb5-423e-954b-4dc87c8a8d63 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.592881367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5eaf044a-99d7-4cff-be6b-e541e6837ab3 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.592975292Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5eaf044a-99d7-4cff-be6b-e541e6837ab3 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.594381226Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c0e88c3-c877-4a4a-9f7c-fdc8a154c907 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.594960526Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429533594938857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c0e88c3-c877-4a4a-9f7c-fdc8a154c907 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.595734471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5e507a3-e494-4649-98b3-18ea7bb22949 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.595821077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5e507a3-e494-4649-98b3-18ea7bb22949 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.595993365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a2f2d1e285cafe6d569508231565aa0fe661f6a87082237233ff67053b59df,PodSandboxId:e747aba3565ca78f9613dccd5cb3c6fc2794156b202ed9176efb1cd95b86bcfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733429527149343993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dr6cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca468a2-0ded-4c92-9a89-fe47b4401e47,},Annotations:map[string]string{io.kubernetes.container.hash: 9fd5629e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf59c1966ced2b90fd60c1f6b8702ea03a6c43c7657e4cdbb6e9e6c335ad081,PodSandboxId:4a1397ffcf6ebe498bb780675d6fb55bd4ad79b200600efa3cfd753ad077baf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733429519889770218,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tz9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0a901a6f-740f-4706-867d-2876ede3881f,},Annotations:map[string]string{io.kubernetes.container.hash: ac18df6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02fea551706b2cb8cd7eb683dcf9d3eef1e969cc7ec134bf86669778c0c3e9c1,PodSandboxId:6e2bf4c06f74cfd0627a6944580c0f30b7cf9057826529e4c119a270c87d6d24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733429519629491417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c
2715f4-c680-44f6-b863-fc18ed405e93,},Annotations:map[string]string{io.kubernetes.container.hash: 2568951f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca67577740e7a588da09982f5b778cd7e1d4254923897d60f2bf0f83e6ff5a,PodSandboxId:c6a70e43cd6c2a0953d6a65c75c08f0e1ef7586ba7ab9b04cdf80fe1921d8980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733429513713210738,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778001d05
2720161dcad79d849eb81a5,},Annotations:map[string]string{io.kubernetes.container.hash: 84ec2f77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e2f5b221c025bc7b55c55abb19bd0a1cc8f8062c8bc6a28751e7537420684c,PodSandboxId:542678afc796c79c9ca4b7aadc9fd0e2d40c3c1ba489a1b83d58256e5feee20d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733429513690012677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 05a292e288ac1920b03a2dc2ff804ba7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5639d780a0f56618f4fe458162df79be3240001360d4e48422b6c7c61f448b,PodSandboxId:3246b125edfaad6190cf9ec12d024efc48a446a5e7bd06cf10571bac7d32abd3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733429513638020895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 159f499397686b01114763f074136aa8,}
,Annotations:map[string]string{io.kubernetes.container.hash: da1f1eeb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd320380555343aa8f53ff4e1b814ab87f89855e27b6282a63b3ac701fcb441,PodSandboxId:68a467c8169406cecc11326233f077681e407d2cc8fe51f146832cafe27602e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733429513569381588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4499820d942cc21cde92e8ba97140362,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5e507a3-e494-4649-98b3-18ea7bb22949 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.631268545Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d2d4087-0876-4b48-b2b7-1082bf7d1a27 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.631359651Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d2d4087-0876-4b48-b2b7-1082bf7d1a27 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.632707758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d444fd5-9ecc-4def-b2e4-f21c8300fbf0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.633141026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429533633117973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d444fd5-9ecc-4def-b2e4-f21c8300fbf0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.633703958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa4063cd-5b94-42f2-8d49-429eaac509f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.633769449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa4063cd-5b94-42f2-8d49-429eaac509f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:12:13 test-preload-572068 crio[673]: time="2024-12-05 20:12:13.633924925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16a2f2d1e285cafe6d569508231565aa0fe661f6a87082237233ff67053b59df,PodSandboxId:e747aba3565ca78f9613dccd5cb3c6fc2794156b202ed9176efb1cd95b86bcfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733429527149343993,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dr6cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca468a2-0ded-4c92-9a89-fe47b4401e47,},Annotations:map[string]string{io.kubernetes.container.hash: 9fd5629e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cf59c1966ced2b90fd60c1f6b8702ea03a6c43c7657e4cdbb6e9e6c335ad081,PodSandboxId:4a1397ffcf6ebe498bb780675d6fb55bd4ad79b200600efa3cfd753ad077baf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733429519889770218,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tz9v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0a901a6f-740f-4706-867d-2876ede3881f,},Annotations:map[string]string{io.kubernetes.container.hash: ac18df6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02fea551706b2cb8cd7eb683dcf9d3eef1e969cc7ec134bf86669778c0c3e9c1,PodSandboxId:6e2bf4c06f74cfd0627a6944580c0f30b7cf9057826529e4c119a270c87d6d24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733429519629491417,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c
2715f4-c680-44f6-b863-fc18ed405e93,},Annotations:map[string]string{io.kubernetes.container.hash: 2568951f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcca67577740e7a588da09982f5b778cd7e1d4254923897d60f2bf0f83e6ff5a,PodSandboxId:c6a70e43cd6c2a0953d6a65c75c08f0e1ef7586ba7ab9b04cdf80fe1921d8980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733429513713210738,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 778001d05
2720161dcad79d849eb81a5,},Annotations:map[string]string{io.kubernetes.container.hash: 84ec2f77,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e2f5b221c025bc7b55c55abb19bd0a1cc8f8062c8bc6a28751e7537420684c,PodSandboxId:542678afc796c79c9ca4b7aadc9fd0e2d40c3c1ba489a1b83d58256e5feee20d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733429513690012677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 05a292e288ac1920b03a2dc2ff804ba7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5639d780a0f56618f4fe458162df79be3240001360d4e48422b6c7c61f448b,PodSandboxId:3246b125edfaad6190cf9ec12d024efc48a446a5e7bd06cf10571bac7d32abd3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733429513638020895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 159f499397686b01114763f074136aa8,}
,Annotations:map[string]string{io.kubernetes.container.hash: da1f1eeb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd320380555343aa8f53ff4e1b814ab87f89855e27b6282a63b3ac701fcb441,PodSandboxId:68a467c8169406cecc11326233f077681e407d2cc8fe51f146832cafe27602e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733429513569381588,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-572068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4499820d942cc21cde92e8ba97140362,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa4063cd-5b94-42f2-8d49-429eaac509f2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16a2f2d1e285c       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   e747aba3565ca       coredns-6d4b75cb6d-dr6cd
	8cf59c1966ced       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   4a1397ffcf6eb       kube-proxy-tz9v5
	02fea551706b2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   6e2bf4c06f74c       storage-provisioner
	bcca67577740e       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   c6a70e43cd6c2       kube-apiserver-test-preload-572068
	34e2f5b221c02       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   542678afc796c       kube-controller-manager-test-preload-572068
	ee5639d780a0f       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   3246b125edfaa       etcd-test-preload-572068
	8bd3203805553       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   68a467c816940       kube-scheduler-test-preload-572068
	
	
	==> coredns [16a2f2d1e285cafe6d569508231565aa0fe661f6a87082237233ff67053b59df] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:50610 - 53729 "HINFO IN 6031835001949184312.2450905881403355011. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017805364s
	
	
	==> describe nodes <==
	Name:               test-preload-572068
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-572068
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=test-preload-572068
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_10_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:10:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-572068
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:12:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:12:08 +0000   Thu, 05 Dec 2024 20:10:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:12:08 +0000   Thu, 05 Dec 2024 20:10:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:12:08 +0000   Thu, 05 Dec 2024 20:10:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:12:08 +0000   Thu, 05 Dec 2024 20:12:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    test-preload-572068
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7095cbb33186416b8f66d3db8f4616a2
	  System UUID:                7095cbb3-3186-416b-8f66-d3db8f4616a2
	  Boot ID:                    83c0aad3-1a82-45f0-a793-377f3ba5f8ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dr6cd                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 etcd-test-preload-572068                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         99s
	  kube-system                 kube-apiserver-test-preload-572068             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-test-preload-572068    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-tz9v5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-test-preload-572068             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  Starting                 85s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  107s (x5 over 107s)  kubelet          Node test-preload-572068 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x5 over 107s)  kubelet          Node test-preload-572068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x4 over 107s)  kubelet          Node test-preload-572068 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  99s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s                  kubelet          Node test-preload-572068 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet          Node test-preload-572068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet          Node test-preload-572068 status is now: NodeHasSufficientPID
	  Normal  NodeReady                89s                  kubelet          Node test-preload-572068 status is now: NodeReady
	  Normal  RegisteredNode           87s                  node-controller  Node test-preload-572068 event: Registered Node test-preload-572068 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s (x8 over 21s)    kubelet          Node test-preload-572068 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 21s)    kubelet          Node test-preload-572068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 21s)    kubelet          Node test-preload-572068 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                   node-controller  Node test-preload-572068 event: Registered Node test-preload-572068 in Controller
	
	
	==> dmesg <==
	[Dec 5 20:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053041] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043795] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.940336] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.772516] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.608713] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.047540] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.062460] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055859] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.199288] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.111872] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.285027] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[ +13.618734] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.063207] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.745534] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +6.105622] kauditd_printk_skb: 105 callbacks suppressed
	[Dec 5 20:12] systemd-fstab-generator[1738]: Ignoring "noauto" option for root device
	[  +0.093208] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.750142] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [ee5639d780a0f56618f4fe458162df79be3240001360d4e48422b6c7c61f448b] <==
	{"level":"info","ts":"2024-12-05T20:11:54.031Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"97e52954629f162b","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-05T20:11:54.040Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T20:11:54.043Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"97e52954629f162b","initial-advertise-peer-urls":["https://192.168.39.29:2380"],"listen-peer-urls":["https://192.168.39.29:2380"],"advertise-client-urls":["https://192.168.39.29:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.29:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T20:11:54.043Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T20:11:54.041Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-05T20:11:54.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b switched to configuration voters=(10945199911802443307)"}
	{"level":"info","ts":"2024-12-05T20:11:54.043Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","added-peer-id":"97e52954629f162b","added-peer-peer-urls":["https://192.168.39.29:2380"]}
	{"level":"info","ts":"2024-12-05T20:11:54.042Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-12-05T20:11:54.044Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.29:2380"}
	{"level":"info","ts":"2024-12-05T20:11:54.045Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f775b7b69fff5d11","local-member-id":"97e52954629f162b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:11:54.045Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:11:55.920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-05T20:11:55.920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-05T20:11:55.920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b received MsgPreVoteResp from 97e52954629f162b at term 2"}
	{"level":"info","ts":"2024-12-05T20:11:55.920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became candidate at term 3"}
	{"level":"info","ts":"2024-12-05T20:11:55.920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b received MsgVoteResp from 97e52954629f162b at term 3"}
	{"level":"info","ts":"2024-12-05T20:11:55.920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97e52954629f162b became leader at term 3"}
	{"level":"info","ts":"2024-12-05T20:11:55.920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97e52954629f162b elected leader 97e52954629f162b at term 3"}
	{"level":"info","ts":"2024-12-05T20:11:55.920Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"97e52954629f162b","local-member-attributes":"{Name:test-preload-572068 ClientURLs:[https://192.168.39.29:2379]}","request-path":"/0/members/97e52954629f162b/attributes","cluster-id":"f775b7b69fff5d11","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:11:55.921Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:11:55.923Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:11:55.923Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:11:55.924Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.29:2379"}
	{"level":"info","ts":"2024-12-05T20:11:55.924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:11:55.924Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:12:13 up 0 min,  0 users,  load average: 0.66, 0.20, 0.07
	Linux test-preload-572068 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bcca67577740e7a588da09982f5b778cd7e1d4254923897d60f2bf0f83e6ff5a] <==
	I1205 20:11:58.339173       1 establishing_controller.go:76] Starting EstablishingController
	I1205 20:11:58.339228       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1205 20:11:58.339268       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1205 20:11:58.339310       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1205 20:11:58.339365       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1205 20:11:58.362039       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 20:11:58.428050       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1205 20:11:58.430082       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1205 20:11:58.431032       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 20:11:58.442927       1 cache.go:39] Caches are synced for autoregister controller
	I1205 20:11:58.443105       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1205 20:11:58.448349       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1205 20:11:58.482468       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1205 20:11:58.513912       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1205 20:11:58.520751       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 20:11:59.015954       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1205 20:11:59.321107       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 20:12:00.133715       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1205 20:12:00.146338       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1205 20:12:00.206042       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1205 20:12:00.232753       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 20:12:00.241139       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 20:12:00.288549       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1205 20:12:11.031007       1 controller.go:611] quota admission added evaluator for: endpoints
	I1205 20:12:11.081037       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [34e2f5b221c025bc7b55c55abb19bd0a1cc8f8062c8bc6a28751e7537420684c] <==
	I1205 20:12:10.878074       1 shared_informer.go:262] Caches are synced for ephemeral
	I1205 20:12:10.878110       1 shared_informer.go:262] Caches are synced for TTL
	I1205 20:12:10.880447       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1205 20:12:10.881962       1 shared_informer.go:262] Caches are synced for taint
	I1205 20:12:10.882193       1 shared_informer.go:262] Caches are synced for HPA
	I1205 20:12:10.882704       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1205 20:12:10.882750       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I1205 20:12:10.883321       1 event.go:294] "Event occurred" object="test-preload-572068" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-572068 event: Registered Node test-preload-572068 in Controller"
	W1205 20:12:10.884916       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-572068. Assuming now as a timestamp.
	I1205 20:12:10.885017       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1205 20:12:10.886374       1 shared_informer.go:262] Caches are synced for cronjob
	I1205 20:12:10.889271       1 shared_informer.go:262] Caches are synced for daemon sets
	I1205 20:12:10.891816       1 shared_informer.go:262] Caches are synced for expand
	I1205 20:12:10.893568       1 shared_informer.go:262] Caches are synced for endpoint
	I1205 20:12:10.895564       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1205 20:12:10.902715       1 shared_informer.go:262] Caches are synced for GC
	I1205 20:12:10.906878       1 shared_informer.go:262] Caches are synced for disruption
	I1205 20:12:10.907311       1 disruption.go:371] Sending events to api server.
	I1205 20:12:10.985905       1 shared_informer.go:262] Caches are synced for attach detach
	I1205 20:12:10.992925       1 shared_informer.go:262] Caches are synced for persistent volume
	I1205 20:12:11.046975       1 shared_informer.go:262] Caches are synced for resource quota
	I1205 20:12:11.087621       1 shared_informer.go:262] Caches are synced for resource quota
	I1205 20:12:11.522743       1 shared_informer.go:262] Caches are synced for garbage collector
	I1205 20:12:11.563368       1 shared_informer.go:262] Caches are synced for garbage collector
	I1205 20:12:11.563387       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [8cf59c1966ced2b90fd60c1f6b8702ea03a6c43c7657e4cdbb6e9e6c335ad081] <==
	I1205 20:12:00.245697       1 node.go:163] Successfully retrieved node IP: 192.168.39.29
	I1205 20:12:00.245916       1 server_others.go:138] "Detected node IP" address="192.168.39.29"
	I1205 20:12:00.246020       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1205 20:12:00.281942       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1205 20:12:00.281972       1 server_others.go:206] "Using iptables Proxier"
	I1205 20:12:00.282031       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1205 20:12:00.282551       1 server.go:661] "Version info" version="v1.24.4"
	I1205 20:12:00.282581       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:12:00.284122       1 config.go:317] "Starting service config controller"
	I1205 20:12:00.284172       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1205 20:12:00.284194       1 config.go:226] "Starting endpoint slice config controller"
	I1205 20:12:00.284217       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1205 20:12:00.285447       1 config.go:444] "Starting node config controller"
	I1205 20:12:00.285474       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1205 20:12:00.384709       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1205 20:12:00.384879       1 shared_informer.go:262] Caches are synced for service config
	I1205 20:12:00.385817       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [8bd320380555343aa8f53ff4e1b814ab87f89855e27b6282a63b3ac701fcb441] <==
	I1205 20:11:54.427096       1 serving.go:348] Generated self-signed cert in-memory
	W1205 20:11:58.384253       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 20:11:58.385200       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:11:58.385975       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 20:11:58.386034       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 20:11:58.456928       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1205 20:11:58.457164       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:11:58.461427       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1205 20:11:58.461720       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:11:58.461946       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:11:58.461752       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1205 20:11:58.562743       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.897694    1130 apiserver.go:52] "Watching apiserver"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.903014    1130 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.903147    1130 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.903188    1130 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.903215    1130 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: E1205 20:11:58.904930    1130 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-dr6cd" podUID=0ca468a2-0ded-4c92-9a89-fe47b4401e47
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.956193    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7c2715f4-c680-44f6-b863-fc18ed405e93-tmp\") pod \"storage-provisioner\" (UID: \"7c2715f4-c680-44f6-b863-fc18ed405e93\") " pod="kube-system/storage-provisioner"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.956816    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a901a6f-740f-4706-867d-2876ede3881f-xtables-lock\") pod \"kube-proxy-tz9v5\" (UID: \"0a901a6f-740f-4706-867d-2876ede3881f\") " pod="kube-system/kube-proxy-tz9v5"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.956979    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a901a6f-740f-4706-867d-2876ede3881f-lib-modules\") pod \"kube-proxy-tz9v5\" (UID: \"0a901a6f-740f-4706-867d-2876ede3881f\") " pod="kube-system/kube-proxy-tz9v5"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.957188    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jnlz\" (UniqueName: \"kubernetes.io/projected/0a901a6f-740f-4706-867d-2876ede3881f-kube-api-access-8jnlz\") pod \"kube-proxy-tz9v5\" (UID: \"0a901a6f-740f-4706-867d-2876ede3881f\") " pod="kube-system/kube-proxy-tz9v5"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.957389    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ca468a2-0ded-4c92-9a89-fe47b4401e47-config-volume\") pod \"coredns-6d4b75cb6d-dr6cd\" (UID: \"0ca468a2-0ded-4c92-9a89-fe47b4401e47\") " pod="kube-system/coredns-6d4b75cb6d-dr6cd"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.957537    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a901a6f-740f-4706-867d-2876ede3881f-kube-proxy\") pod \"kube-proxy-tz9v5\" (UID: \"0a901a6f-740f-4706-867d-2876ede3881f\") " pod="kube-system/kube-proxy-tz9v5"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.957726    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79c2m\" (UniqueName: \"kubernetes.io/projected/7c2715f4-c680-44f6-b863-fc18ed405e93-kube-api-access-79c2m\") pod \"storage-provisioner\" (UID: \"7c2715f4-c680-44f6-b863-fc18ed405e93\") " pod="kube-system/storage-provisioner"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.957872    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pv8t\" (UniqueName: \"kubernetes.io/projected/0ca468a2-0ded-4c92-9a89-fe47b4401e47-kube-api-access-7pv8t\") pod \"coredns-6d4b75cb6d-dr6cd\" (UID: \"0ca468a2-0ded-4c92-9a89-fe47b4401e47\") " pod="kube-system/coredns-6d4b75cb6d-dr6cd"
	Dec 05 20:11:58 test-preload-572068 kubelet[1130]: I1205 20:11:58.958064    1130 reconciler.go:159] "Reconciler: start to sync state"
	Dec 05 20:11:59 test-preload-572068 kubelet[1130]: E1205 20:11:59.064471    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 20:11:59 test-preload-572068 kubelet[1130]: E1205 20:11:59.064816    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0ca468a2-0ded-4c92-9a89-fe47b4401e47-config-volume podName:0ca468a2-0ded-4c92-9a89-fe47b4401e47 nodeName:}" failed. No retries permitted until 2024-12-05 20:11:59.564723736 +0000 UTC m=+6.813665747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0ca468a2-0ded-4c92-9a89-fe47b4401e47-config-volume") pod "coredns-6d4b75cb6d-dr6cd" (UID: "0ca468a2-0ded-4c92-9a89-fe47b4401e47") : object "kube-system"/"coredns" not registered
	Dec 05 20:11:59 test-preload-572068 kubelet[1130]: E1205 20:11:59.568121    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 20:11:59 test-preload-572068 kubelet[1130]: E1205 20:11:59.568204    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0ca468a2-0ded-4c92-9a89-fe47b4401e47-config-volume podName:0ca468a2-0ded-4c92-9a89-fe47b4401e47 nodeName:}" failed. No retries permitted until 2024-12-05 20:12:00.568189487 +0000 UTC m=+7.817131487 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0ca468a2-0ded-4c92-9a89-fe47b4401e47-config-volume") pod "coredns-6d4b75cb6d-dr6cd" (UID: "0ca468a2-0ded-4c92-9a89-fe47b4401e47") : object "kube-system"/"coredns" not registered
	Dec 05 20:12:00 test-preload-572068 kubelet[1130]: E1205 20:12:00.574703    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 20:12:00 test-preload-572068 kubelet[1130]: E1205 20:12:00.574814    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0ca468a2-0ded-4c92-9a89-fe47b4401e47-config-volume podName:0ca468a2-0ded-4c92-9a89-fe47b4401e47 nodeName:}" failed. No retries permitted until 2024-12-05 20:12:02.574760955 +0000 UTC m=+9.823702967 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0ca468a2-0ded-4c92-9a89-fe47b4401e47-config-volume") pod "coredns-6d4b75cb6d-dr6cd" (UID: "0ca468a2-0ded-4c92-9a89-fe47b4401e47") : object "kube-system"/"coredns" not registered
	Dec 05 20:12:00 test-preload-572068 kubelet[1130]: E1205 20:12:00.989273    1130 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-dr6cd" podUID=0ca468a2-0ded-4c92-9a89-fe47b4401e47
	Dec 05 20:12:00 test-preload-572068 kubelet[1130]: I1205 20:12:00.995321    1130 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ab1cb364-2003-43a0-9d37-2358668c44f0 path="/var/lib/kubelet/pods/ab1cb364-2003-43a0-9d37-2358668c44f0/volumes"
	Dec 05 20:12:02 test-preload-572068 kubelet[1130]: E1205 20:12:02.588024    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 20:12:02 test-preload-572068 kubelet[1130]: E1205 20:12:02.588517    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/0ca468a2-0ded-4c92-9a89-fe47b4401e47-config-volume podName:0ca468a2-0ded-4c92-9a89-fe47b4401e47 nodeName:}" failed. No retries permitted until 2024-12-05 20:12:06.588493845 +0000 UTC m=+13.837435857 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0ca468a2-0ded-4c92-9a89-fe47b4401e47-config-volume") pod "coredns-6d4b75cb6d-dr6cd" (UID: "0ca468a2-0ded-4c92-9a89-fe47b4401e47") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [02fea551706b2cb8cd7eb683dcf9d3eef1e969cc7ec134bf86669778c0c3e9c1] <==
	I1205 20:11:59.703343       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-572068 -n test-preload-572068
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-572068 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-572068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-572068
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-572068: (1.153771802s)
--- FAIL: TestPreload (177.08s)

                                                
                                    
TestKubernetesUpgrade (443.92s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-886958 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-886958 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.65518668s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-886958] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-886958" primary control-plane node in "kubernetes-upgrade-886958" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:14:14.588832  573726 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:14:14.588973  573726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:14:14.588984  573726 out.go:358] Setting ErrFile to fd 2...
	I1205 20:14:14.588990  573726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:14:14.589246  573726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:14:14.589829  573726 out.go:352] Setting JSON to false
	I1205 20:14:14.590843  573726 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10601,"bootTime":1733419054,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:14:14.590962  573726 start.go:139] virtualization: kvm guest
	I1205 20:14:14.593710  573726 out.go:177] * [kubernetes-upgrade-886958] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:14:14.595047  573726 notify.go:220] Checking for updates...
	I1205 20:14:14.597209  573726 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:14:14.599292  573726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:14:14.601590  573726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:14:14.603006  573726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:14:14.604348  573726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:14:14.605516  573726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:14:14.606967  573726 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:14:14.644944  573726 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:14:14.646417  573726 start.go:297] selected driver: kvm2
	I1205 20:14:14.646435  573726 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:14:14.646466  573726 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:14:14.647257  573726 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:14:14.647375  573726 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:14:14.663598  573726 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:14:14.663700  573726 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:14:14.664044  573726 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 20:14:14.664078  573726 cni.go:84] Creating CNI manager for ""
	I1205 20:14:14.664127  573726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:14:14.664136  573726 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:14:14.664196  573726 start.go:340] cluster config:
	{Name:kubernetes-upgrade-886958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-886958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:14:14.664404  573726 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:14:14.666403  573726 out.go:177] * Starting "kubernetes-upgrade-886958" primary control-plane node in "kubernetes-upgrade-886958" cluster
	I1205 20:14:14.668053  573726 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:14:14.668115  573726 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 20:14:14.668136  573726 cache.go:56] Caching tarball of preloaded images
	I1205 20:14:14.668231  573726 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:14:14.668244  573726 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 20:14:14.668607  573726 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/config.json ...
	I1205 20:14:14.668636  573726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/config.json: {Name:mkd74f81ad411bbf51c06a0ccc9d63cdb1b83e3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:14:14.668782  573726 start.go:360] acquireMachinesLock for kubernetes-upgrade-886958: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:14:14.668817  573726 start.go:364] duration metric: took 15.456µs to acquireMachinesLock for "kubernetes-upgrade-886958"
	I1205 20:14:14.668833  573726 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-886958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-886958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:14:14.668889  573726 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:14:14.670564  573726 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:14:14.670704  573726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:14:14.670742  573726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:14:14.687102  573726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45789
	I1205 20:14:14.687513  573726 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:14:14.688141  573726 main.go:141] libmachine: Using API Version  1
	I1205 20:14:14.688164  573726 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:14:14.688548  573726 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:14:14.688764  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetMachineName
	I1205 20:14:14.688922  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:14:14.689136  573726 start.go:159] libmachine.API.Create for "kubernetes-upgrade-886958" (driver="kvm2")
	I1205 20:14:14.689164  573726 client.go:168] LocalClient.Create starting
	I1205 20:14:14.689203  573726 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 20:14:14.689246  573726 main.go:141] libmachine: Decoding PEM data...
	I1205 20:14:14.689268  573726 main.go:141] libmachine: Parsing certificate...
	I1205 20:14:14.689325  573726 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 20:14:14.689344  573726 main.go:141] libmachine: Decoding PEM data...
	I1205 20:14:14.689355  573726 main.go:141] libmachine: Parsing certificate...
	I1205 20:14:14.689372  573726 main.go:141] libmachine: Running pre-create checks...
	I1205 20:14:14.689381  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .PreCreateCheck
	I1205 20:14:14.689739  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetConfigRaw
	I1205 20:14:14.690125  573726 main.go:141] libmachine: Creating machine...
	I1205 20:14:14.690140  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .Create
	I1205 20:14:14.690276  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Creating KVM machine...
	I1205 20:14:14.691482  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found existing default KVM network
	I1205 20:14:14.692193  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:14.692038  573766 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b60}
	I1205 20:14:14.692217  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | created network xml: 
	I1205 20:14:14.692227  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | <network>
	I1205 20:14:14.692235  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG |   <name>mk-kubernetes-upgrade-886958</name>
	I1205 20:14:14.692243  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG |   <dns enable='no'/>
	I1205 20:14:14.692253  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG |   
	I1205 20:14:14.692289  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 20:14:14.692312  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG |     <dhcp>
	I1205 20:14:14.692328  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 20:14:14.692338  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG |     </dhcp>
	I1205 20:14:14.692363  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG |   </ip>
	I1205 20:14:14.692373  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG |   
	I1205 20:14:14.692387  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | </network>
	I1205 20:14:14.692402  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | 
	I1205 20:14:14.697507  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | trying to create private KVM network mk-kubernetes-upgrade-886958 192.168.39.0/24...
	I1205 20:14:14.765766  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | private KVM network mk-kubernetes-upgrade-886958 192.168.39.0/24 created
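
The network-creation step above amounts to defining the logged XML with libvirt and starting the resulting network. A minimal Go sketch of the same idea, shelling out to virsh (the helper name and temp-file handling are illustrative, not minikube's own code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // defineNetwork writes the libvirt network XML to a temp file and
    // defines/starts it with virsh, mirroring the "created network xml"
    // step in the log above. Paths and names are illustrative.
    func defineNetwork(name, xml string) error {
    	f, err := os.CreateTemp("", name+"-*.xml")
    	if err != nil {
    		return err
    	}
    	defer os.Remove(f.Name())
    	if _, err := f.WriteString(xml); err != nil {
    		return err
    	}
    	f.Close()

    	// virsh net-define registers the network; net-start activates it.
    	for _, args := range [][]string{
    		{"net-define", f.Name()},
    		{"net-start", name},
    		{"net-autostart", name},
    	} {
    		cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	xml := `<network>
      <name>mk-kubernetes-upgrade-886958</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`
    	if err := defineNetwork("mk-kubernetes-upgrade-886958", xml); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
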
	I1205 20:14:14.765798  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958 ...
	I1205 20:14:14.765817  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:14.765728  573766 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:14:14.765830  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:14:14.765850  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:14:15.054748  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:15.054606  573766 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa...
	I1205 20:14:15.417350  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:15.417073  573766 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/kubernetes-upgrade-886958.rawdisk...
	I1205 20:14:15.417388  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Writing magic tar header
	I1205 20:14:15.417589  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Writing SSH key tar header
	I1205 20:14:15.417729  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:15.417661  573766 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958 ...
	I1205 20:14:15.417855  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958
	I1205 20:14:15.417876  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958 (perms=drwx------)
	I1205 20:14:15.417895  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 20:14:15.417923  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:14:15.417940  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 20:14:15.417954  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:14:15.417965  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:14:15.417984  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:14:15.417996  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Checking permissions on dir: /home
	I1205 20:14:15.418011  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Skipping /home - not owner
	I1205 20:14:15.418027  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 20:14:15.418039  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 20:14:15.418093  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:14:15.418116  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:14:15.418128  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Creating domain...
	I1205 20:14:15.419361  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) define libvirt domain using xml: 
	I1205 20:14:15.419379  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) <domain type='kvm'>
	I1205 20:14:15.419386  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   <name>kubernetes-upgrade-886958</name>
	I1205 20:14:15.419406  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   <memory unit='MiB'>2200</memory>
	I1205 20:14:15.419442  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   <vcpu>2</vcpu>
	I1205 20:14:15.419446  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   <features>
	I1205 20:14:15.419451  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <acpi/>
	I1205 20:14:15.419455  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <apic/>
	I1205 20:14:15.419461  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <pae/>
	I1205 20:14:15.419470  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     
	I1205 20:14:15.419475  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   </features>
	I1205 20:14:15.419480  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   <cpu mode='host-passthrough'>
	I1205 20:14:15.419485  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   
	I1205 20:14:15.419489  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   </cpu>
	I1205 20:14:15.419498  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   <os>
	I1205 20:14:15.419502  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <type>hvm</type>
	I1205 20:14:15.419512  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <boot dev='cdrom'/>
	I1205 20:14:15.419519  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <boot dev='hd'/>
	I1205 20:14:15.419525  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <bootmenu enable='no'/>
	I1205 20:14:15.419532  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   </os>
	I1205 20:14:15.419538  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   <devices>
	I1205 20:14:15.419548  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <disk type='file' device='cdrom'>
	I1205 20:14:15.419586  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/boot2docker.iso'/>
	I1205 20:14:15.419611  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <target dev='hdc' bus='scsi'/>
	I1205 20:14:15.419645  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <readonly/>
	I1205 20:14:15.419666  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     </disk>
	I1205 20:14:15.419678  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <disk type='file' device='disk'>
	I1205 20:14:15.419688  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:14:15.419708  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/kubernetes-upgrade-886958.rawdisk'/>
	I1205 20:14:15.419725  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <target dev='hda' bus='virtio'/>
	I1205 20:14:15.419733  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     </disk>
	I1205 20:14:15.419740  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <interface type='network'>
	I1205 20:14:15.419752  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <source network='mk-kubernetes-upgrade-886958'/>
	I1205 20:14:15.419768  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <model type='virtio'/>
	I1205 20:14:15.419777  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     </interface>
	I1205 20:14:15.419787  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <interface type='network'>
	I1205 20:14:15.419799  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <source network='default'/>
	I1205 20:14:15.419807  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <model type='virtio'/>
	I1205 20:14:15.419820  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     </interface>
	I1205 20:14:15.419831  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <serial type='pty'>
	I1205 20:14:15.419846  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <target port='0'/>
	I1205 20:14:15.419860  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     </serial>
	I1205 20:14:15.419871  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <console type='pty'>
	I1205 20:14:15.419884  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <target type='serial' port='0'/>
	I1205 20:14:15.419893  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     </console>
	I1205 20:14:15.419900  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     <rng model='virtio'>
	I1205 20:14:15.419914  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)       <backend model='random'>/dev/random</backend>
	I1205 20:14:15.419929  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     </rng>
	I1205 20:14:15.419944  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     
	I1205 20:14:15.419953  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)     
	I1205 20:14:15.419962  573726 main.go:141] libmachine: (kubernetes-upgrade-886958)   </devices>
	I1205 20:14:15.419969  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) </domain>
	I1205 20:14:15.419980  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) 
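
The domain XML above is rendered from a template before being handed to libvirt. A trimmed-down sketch of that pattern using Go's text/template; the template text and struct fields are illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A cut-down domain template in the spirit of the XML logged above.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    type domainConfig struct {
    	Name      string
    	MemoryMiB int
    	CPUs      int
    	DiskPath  string
    	Network   string
    }

    func main() {
    	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
    	cfg := domainConfig{
    		Name:      "kubernetes-upgrade-886958",
    		MemoryMiB: 2200,
    		CPUs:      2,
    		DiskPath:  "/var/lib/libvirt/images/kubernetes-upgrade-886958.rawdisk",
    		Network:   "mk-kubernetes-upgrade-886958",
    	}
    	// The rendered XML is what gets defined as the libvirt domain in the
    	// "define libvirt domain using xml" step logged above.
    	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
    		panic(err)
    	}
    }
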
	I1205 20:14:15.425025  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:0e:4e:f8 in network default
	I1205 20:14:15.425824  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Ensuring networks are active...
	I1205 20:14:15.425845  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:15.426855  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Ensuring network default is active
	I1205 20:14:15.427196  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Ensuring network mk-kubernetes-upgrade-886958 is active
	I1205 20:14:15.427712  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Getting domain xml...
	I1205 20:14:15.428598  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Creating domain...
	I1205 20:14:16.784197  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Waiting to get IP...
	I1205 20:14:16.785254  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:16.785675  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:16.785723  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:16.785671  573766 retry.go:31] will retry after 269.79816ms: waiting for machine to come up
	I1205 20:14:17.057333  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:17.057885  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:17.057920  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:17.057835  573766 retry.go:31] will retry after 297.5966ms: waiting for machine to come up
	I1205 20:14:17.357324  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:17.357789  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:17.357818  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:17.357738  573766 retry.go:31] will retry after 457.532657ms: waiting for machine to come up
	I1205 20:14:17.817527  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:17.818002  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:17.818033  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:17.817976  573766 retry.go:31] will retry after 577.050254ms: waiting for machine to come up
	I1205 20:14:18.396379  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:18.396931  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:18.396965  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:18.396851  573766 retry.go:31] will retry after 542.487217ms: waiting for machine to come up
	I1205 20:14:18.940597  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:18.940979  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:18.941001  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:18.940954  573766 retry.go:31] will retry after 663.254618ms: waiting for machine to come up
	I1205 20:14:19.605610  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:19.606049  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:19.606076  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:19.605973  573766 retry.go:31] will retry after 1.094108387s: waiting for machine to come up
	I1205 20:14:20.701462  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:20.701902  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:20.701939  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:20.701861  573766 retry.go:31] will retry after 1.189018822s: waiting for machine to come up
	I1205 20:14:21.892191  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:21.892615  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:21.892643  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:21.892559  573766 retry.go:31] will retry after 1.4169767s: waiting for machine to come up
	I1205 20:14:23.311004  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:23.311411  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:23.311443  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:23.311362  573766 retry.go:31] will retry after 1.765885866s: waiting for machine to come up
	I1205 20:14:25.078583  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:25.079102  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:25.079136  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:25.079028  573766 retry.go:31] will retry after 1.798653384s: waiting for machine to come up
	I1205 20:14:26.880072  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:26.880497  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:26.880528  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:26.880443  573766 retry.go:31] will retry after 3.574416416s: waiting for machine to come up
	I1205 20:14:30.456100  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:30.456566  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:30.456590  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:30.456522  573766 retry.go:31] will retry after 2.764178021s: waiting for machine to come up
	I1205 20:14:33.223828  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:33.224311  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find current IP address of domain kubernetes-upgrade-886958 in network mk-kubernetes-upgrade-886958
	I1205 20:14:33.224345  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | I1205 20:14:33.224249  573766 retry.go:31] will retry after 3.637057757s: waiting for machine to come up
	I1205 20:14:36.863739  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:36.864211  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Found IP for machine: 192.168.39.144
	I1205 20:14:36.864241  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has current primary IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:36.864250  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Reserving static IP address...
	I1205 20:14:36.864676  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-886958", mac: "52:54:00:d3:f0:89", ip: "192.168.39.144"} in network mk-kubernetes-upgrade-886958
	I1205 20:14:36.943063  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Getting to WaitForSSH function...
	I1205 20:14:36.943095  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Reserved static IP address: 192.168.39.144
	I1205 20:14:36.943108  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Waiting for SSH to be available...
	I1205 20:14:36.945855  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:36.946172  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:36.946207  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:36.946397  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Using SSH client type: external
	I1205 20:14:36.946427  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa (-rw-------)
	I1205 20:14:36.946464  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:14:36.946478  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | About to run SSH command:
	I1205 20:14:36.946487  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | exit 0
	I1205 20:14:37.072406  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | SSH cmd err, output: <nil>: 
	I1205 20:14:37.072694  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) KVM machine creation complete!
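
The "will retry after ..." lines above are a jittered backoff loop that polls the DHCP lease until the new domain reports an address. A self-contained sketch of that pattern; lookupIP is a stand-in for whatever reads the lease:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls a lookup function with a jittered, growing delay,
    // mirroring the "will retry after ...: waiting for machine to come up"
    // pattern in the log above.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil && ip != "" {
    			return ip, nil
    		}
    		// Add jitter and grow the delay, capped at a few seconds.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 4*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	calls := 0
    	ip, err := waitForIP(func() (string, error) {
    		calls++
    		if calls < 4 {
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.39.144", nil
    	}, time.Minute)
    	fmt.Println(ip, err)
    }
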
	I1205 20:14:37.072990  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetConfigRaw
	I1205 20:14:37.073613  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:14:37.073805  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:14:37.073962  573726 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:14:37.073979  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetState
	I1205 20:14:37.075192  573726 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:14:37.075210  573726 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:14:37.075218  573726 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:14:37.075226  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:14:37.077526  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.077856  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:37.077885  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.078051  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:14:37.078245  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:37.078382  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:37.078526  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:14:37.078644  573726 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:37.078860  573726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1205 20:14:37.078873  573726 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:14:37.183728  573726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:14:37.183757  573726 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:14:37.183765  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:14:37.186984  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.187384  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:37.187418  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.187563  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:14:37.187818  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:37.188061  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:37.188232  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:14:37.188425  573726 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:37.188608  573726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1205 20:14:37.188620  573726 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:14:37.293300  573726 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:14:37.293438  573726 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:14:37.293458  573726 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:14:37.293470  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetMachineName
	I1205 20:14:37.293745  573726 buildroot.go:166] provisioning hostname "kubernetes-upgrade-886958"
	I1205 20:14:37.293793  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetMachineName
	I1205 20:14:37.294016  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:14:37.297184  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.297719  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:37.297752  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.297939  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:14:37.298147  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:37.298338  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:37.298518  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:14:37.298718  573726 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:37.298955  573726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1205 20:14:37.298974  573726 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-886958 && echo "kubernetes-upgrade-886958" | sudo tee /etc/hostname
	I1205 20:14:37.415060  573726 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-886958
	
	I1205 20:14:37.415093  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:14:37.418033  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.418516  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:37.418551  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.418716  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:14:37.418946  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:37.419119  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:37.419264  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:14:37.419436  573726 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:37.419648  573726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1205 20:14:37.419666  573726 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-886958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-886958/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-886958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:14:37.534760  573726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
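
Each provisioning step above ("exit 0", setting the hostname, patching /etc/hosts) is a single command run over SSH with the machine's generated key. A sketch of that pattern using golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, but the code itself is illustrative rather than minikube's implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH dials the VM and runs one command, the same pattern the
    // provisioner uses for the commands logged above.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the VM's host key is freshly generated
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runSSH("192.168.39.144:22", "docker",
    		"/home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa",
    		`sudo hostname kubernetes-upgrade-886958 && echo "kubernetes-upgrade-886958" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }
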
	I1205 20:14:37.534796  573726 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:14:37.534828  573726 buildroot.go:174] setting up certificates
	I1205 20:14:37.534841  573726 provision.go:84] configureAuth start
	I1205 20:14:37.534850  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetMachineName
	I1205 20:14:37.535144  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetIP
	I1205 20:14:37.538810  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.539240  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:37.539271  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.539414  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:14:37.542311  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.542730  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:37.542760  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.542922  573726 provision.go:143] copyHostCerts
	I1205 20:14:37.543017  573726 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:14:37.543038  573726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:14:37.543097  573726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:14:37.543221  573726 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:14:37.543229  573726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:14:37.543259  573726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:14:37.543349  573726 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:14:37.543361  573726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:14:37.543396  573726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:14:37.543474  573726 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-886958 san=[127.0.0.1 192.168.39.144 kubernetes-upgrade-886958 localhost minikube]
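
The server certificate generated above carries the SANs listed in the log (127.0.0.1, 192.168.39.144, the machine name, localhost, minikube) and is signed by the minikube CA. A short crypto/x509 sketch of building such a certificate; it self-signs for brevity instead of using the CA key, so it is an illustration of the technique rather than the provisioner's exact code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key and certificate template with the SANs listed in the log.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-886958"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"kubernetes-upgrade-886958", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.144")},
    	}
    	// Self-signed here; the real flow signs with ca.pem / ca-key.pem.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
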
	I1205 20:14:37.627636  573726 provision.go:177] copyRemoteCerts
	I1205 20:14:37.627695  573726 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:14:37.627769  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:14:37.630643  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.630921  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:37.630949  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.631100  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:14:37.631277  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:37.631432  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:14:37.631607  573726 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa Username:docker}
	I1205 20:14:37.716210  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:14:37.742576  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:14:37.769394  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:14:37.795677  573726 provision.go:87] duration metric: took 260.818ms to configureAuth
	I1205 20:14:37.795718  573726 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:14:37.795949  573726 config.go:182] Loaded profile config "kubernetes-upgrade-886958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:14:37.796061  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:14:37.798670  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.798990  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:37.799027  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:37.799168  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:14:37.799347  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:37.799585  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:37.799763  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:14:37.799922  573726 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:37.800109  573726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1205 20:14:37.800122  573726 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:14:38.032439  573726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:14:38.032473  573726 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:14:38.032503  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetURL
	I1205 20:14:38.033828  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | Using libvirt version 6000000
	I1205 20:14:38.036100  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.036489  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:38.036519  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.036672  573726 main.go:141] libmachine: Docker is up and running!
	I1205 20:14:38.036686  573726 main.go:141] libmachine: Reticulating splines...
	I1205 20:14:38.036695  573726 client.go:171] duration metric: took 23.34752049s to LocalClient.Create
	I1205 20:14:38.036726  573726 start.go:167] duration metric: took 23.347591066s to libmachine.API.Create "kubernetes-upgrade-886958"
	I1205 20:14:38.036740  573726 start.go:293] postStartSetup for "kubernetes-upgrade-886958" (driver="kvm2")
	I1205 20:14:38.036753  573726 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:14:38.036774  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:14:38.037060  573726 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:14:38.037113  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:14:38.039457  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.039755  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:38.039803  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.039912  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:14:38.040194  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:38.040388  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:14:38.040681  573726 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa Username:docker}
	I1205 20:14:38.123327  573726 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:14:38.127961  573726 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:14:38.128006  573726 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:14:38.128071  573726 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:14:38.128181  573726 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:14:38.128346  573726 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:14:38.137990  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:14:38.163477  573726 start.go:296] duration metric: took 126.718252ms for postStartSetup
	I1205 20:14:38.163546  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetConfigRaw
	I1205 20:14:38.164237  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetIP
	I1205 20:14:38.166868  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.167278  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:38.167311  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.167518  573726 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/config.json ...
	I1205 20:14:38.167758  573726 start.go:128] duration metric: took 23.498857378s to createHost
	I1205 20:14:38.167786  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:14:38.170348  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.170671  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:38.170698  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.170907  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:14:38.171114  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:38.171267  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:38.171410  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:14:38.171544  573726 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:38.171719  573726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1205 20:14:38.171734  573726 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:14:38.277742  573726 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733429678.248360908
	
	I1205 20:14:38.277777  573726 fix.go:216] guest clock: 1733429678.248360908
	I1205 20:14:38.277785  573726 fix.go:229] Guest: 2024-12-05 20:14:38.248360908 +0000 UTC Remote: 2024-12-05 20:14:38.167773476 +0000 UTC m=+23.624106951 (delta=80.587432ms)
	I1205 20:14:38.277817  573726 fix.go:200] guest clock delta is within tolerance: 80.587432ms
	I1205 20:14:38.277830  573726 start.go:83] releasing machines lock for "kubernetes-upgrade-886958", held for 23.609004451s
	I1205 20:14:38.277864  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:14:38.278233  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetIP
	I1205 20:14:38.281443  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.281872  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:38.281907  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.282063  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:14:38.282634  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:14:38.282879  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:14:38.283004  573726 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:14:38.283047  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:14:38.283187  573726 ssh_runner.go:195] Run: cat /version.json
	I1205 20:14:38.283257  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:14:38.285992  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.286079  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.286424  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:38.286453  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:38.286485  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.286713  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:38.286778  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:14:38.286991  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:38.287004  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:14:38.287142  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:14:38.287224  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:14:38.287306  573726 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa Username:docker}
	I1205 20:14:38.287345  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:14:38.287482  573726 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa Username:docker}
	I1205 20:14:38.397088  573726 ssh_runner.go:195] Run: systemctl --version
	I1205 20:14:38.406521  573726 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:14:38.578824  573726 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:14:38.586641  573726 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:14:38.586738  573726 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:14:38.605842  573726 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:14:38.605866  573726 start.go:495] detecting cgroup driver to use...
	I1205 20:14:38.605937  573726 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:14:38.624388  573726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:14:38.639830  573726 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:14:38.639901  573726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:14:38.654621  573726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:14:38.669306  573726 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:14:38.784242  573726 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:14:38.946849  573726 docker.go:233] disabling docker service ...
	I1205 20:14:38.946933  573726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:14:38.962083  573726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:14:38.976149  573726 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:14:39.105072  573726 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:14:39.221279  573726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
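Condensed, the runtime switch above amounts to the following sequence on the guest (a sketch of the same systemctl calls minikube issues over SSH; unit names are taken from the log lines above):

    sudo systemctl stop -f cri-docker.socket
    sudo systemctl stop -f cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service

With docker and cri-docker stopped and masked, CRI-O is left as the only container runtime the kubelet can talk to.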
	I1205 20:14:39.235819  573726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:14:39.255312  573726 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:14:39.255376  573726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:14:39.268314  573726 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:14:39.268404  573726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:14:39.281105  573726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:14:39.292757  573726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
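The three sed edits above aim to leave /etc/crio/crio.conf.d/02-crio.conf with settings roughly equivalent to the following fragment (a sketch; any other keys and table headers come from the CRI-O 1.29 defaults shipped in the ISO):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"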
	I1205 20:14:39.306034  573726 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:14:39.319466  573726 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:14:39.329902  573726 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:14:39.329988  573726 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:14:39.344087  573726 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
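The sysctl failure above only means the br_netfilter module was not yet loaded; minikube therefore loads it and enables forwarding directly, which by hand would look roughly like:

    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    # with the module loaded, the original probe should now succeed:
    sudo sysctl net.bridge.bridge-nf-call-iptables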
	I1205 20:14:39.359518  573726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:14:39.482996  573726 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:14:39.587687  573726 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:14:39.587765  573726 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:14:39.594642  573726 start.go:563] Will wait 60s for crictl version
	I1205 20:14:39.594699  573726 ssh_runner.go:195] Run: which crictl
	I1205 20:14:39.599595  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:14:39.651597  573726 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:14:39.651693  573726 ssh_runner.go:195] Run: crio --version
	I1205 20:14:39.682396  573726 ssh_runner.go:195] Run: crio --version
	I1205 20:14:39.717493  573726 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 20:14:39.718776  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetIP
	I1205 20:14:39.722141  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:39.722569  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:14:30 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:14:39.722605  573726 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:14:39.722837  573726 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:14:39.727271  573726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:14:39.741227  573726 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-886958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-886958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:14:39.741362  573726 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:14:39.741409  573726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:14:39.784543  573726 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:14:39.784643  573726 ssh_runner.go:195] Run: which lz4
	I1205 20:14:39.789048  573726 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:14:39.793669  573726 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:14:39.793709  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 20:14:41.644501  573726 crio.go:462] duration metric: took 1.855485459s to copy over tarball
	I1205 20:14:41.644598  573726 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:14:44.402640  573726 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.75800626s)
	I1205 20:14:44.402686  573726 crio.go:469] duration metric: took 2.758145576s to extract the tarball
	I1205 20:14:44.402698  573726 ssh_runner.go:146] rm: /preloaded.tar.lz4
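The preload step above reduces to copying the tarball onto the guest and unpacking it into /var, preserving extended attributes so image layers keep their capabilities (a sketch; the scp invocation is hypothetical, since minikube uses its own SSH client rather than the scp binary):

    scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 docker@192.168.39.144:/preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4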
	I1205 20:14:44.448970  573726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:14:44.506106  573726 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:14:44.506143  573726 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:14:44.506240  573726 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:14:44.506313  573726 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:14:44.506313  573726 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:14:44.506330  573726 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:14:44.506365  573726 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:14:44.506250  573726 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:14:44.506243  573726 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:14:44.506272  573726 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 20:14:44.507923  573726 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:14:44.507933  573726 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:14:44.507925  573726 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 20:14:44.508029  573726 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:14:44.508064  573726 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:14:44.508049  573726 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:14:44.508080  573726 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:14:44.508072  573726 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:14:44.685697  573726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 20:14:44.708666  573726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 20:14:44.740644  573726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:14:44.744632  573726 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 20:14:44.744691  573726 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 20:14:44.744744  573726 ssh_runner.go:195] Run: which crictl
	I1205 20:14:44.767575  573726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 20:14:44.783869  573726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:14:44.800041  573726 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 20:14:44.800093  573726 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 20:14:44.800143  573726 ssh_runner.go:195] Run: which crictl
	I1205 20:14:44.815785  573726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:14:44.838229  573726 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 20:14:44.838296  573726 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:14:44.838345  573726 ssh_runner.go:195] Run: which crictl
	I1205 20:14:44.838349  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:14:44.844190  573726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:14:44.892376  573726 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 20:14:44.892427  573726 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:14:44.892470  573726 ssh_runner.go:195] Run: which crictl
	I1205 20:14:44.919179  573726 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 20:14:44.919227  573726 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:14:44.919281  573726 ssh_runner.go:195] Run: which crictl
	I1205 20:14:44.919297  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:14:44.919325  573726 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 20:14:44.919363  573726 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:14:44.919422  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:14:44.919425  573726 ssh_runner.go:195] Run: which crictl
	I1205 20:14:44.952867  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:14:44.966970  573726 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 20:14:44.967029  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:14:44.967038  573726 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:14:44.967093  573726 ssh_runner.go:195] Run: which crictl
	I1205 20:14:44.967117  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:14:45.015235  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:14:45.052075  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:14:45.052112  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:14:45.098833  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:14:45.098981  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:14:45.099192  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:14:45.110621  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:14:45.116408  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:14:45.228159  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:14:45.228195  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:14:45.291867  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:14:45.292954  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:14:45.293059  573726 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 20:14:45.312975  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:14:45.313046  573726 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 20:14:45.317657  573726 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 20:14:45.318414  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:14:45.357614  573726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:14:45.390985  573726 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 20:14:45.411259  573726 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 20:14:45.412523  573726 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 20:14:45.431369  573726 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 20:14:45.735629  573726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:14:45.882054  573726 cache_images.go:92] duration metric: took 1.375886867s to LoadCachedImages
	W1205 20:14:45.882187  573726 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
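The warning above is non-fatal: because none of the expected files exist under the local image cache, minikube gives up on side-loading and lets kubeadm pull the v1.20.0 images during its preflight phase later in the run. Checking the cache by hand would be something like:

    ls -l /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/
    # entries such as coredns_1.7.0, etcd_3.4.13-0, kube-apiserver_v1.20.0 are missing in this run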
	I1205 20:14:45.882209  573726 kubeadm.go:934] updating node { 192.168.39.144 8443 v1.20.0 crio true true} ...
	I1205 20:14:45.882367  573726 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-886958 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-886958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:14:45.882462  573726 ssh_runner.go:195] Run: crio config
	I1205 20:14:45.930675  573726 cni.go:84] Creating CNI manager for ""
	I1205 20:14:45.930698  573726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:14:45.930710  573726 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:14:45.930737  573726 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-886958 NodeName:kubernetes-upgrade-886958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:14:45.930899  573726 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-886958"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:14:45.930979  573726 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 20:14:45.941769  573726 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:14:45.941854  573726 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:14:45.952244  573726 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1205 20:14:45.971283  573726 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:14:45.990696  573726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1205 20:14:46.010006  573726 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I1205 20:14:46.014443  573726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:14:46.027765  573726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:14:46.157073  573726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:14:46.178416  573726 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958 for IP: 192.168.39.144
	I1205 20:14:46.178445  573726 certs.go:194] generating shared ca certs ...
	I1205 20:14:46.178474  573726 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:14:46.178700  573726 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:14:46.178760  573726 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:14:46.178775  573726 certs.go:256] generating profile certs ...
	I1205 20:14:46.178851  573726 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/client.key
	I1205 20:14:46.178873  573726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/client.crt with IP's: []
	I1205 20:14:46.377263  573726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/client.crt ...
	I1205 20:14:46.377298  573726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/client.crt: {Name:mk78eaecb0d7bc077fe36b6b7657f2f3ac792914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:14:46.377514  573726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/client.key ...
	I1205 20:14:46.377546  573726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/client.key: {Name:mk067fa3f838ee4ff3c97701fd64a2baa3194110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:14:46.377659  573726 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.key.0467c358
	I1205 20:14:46.377687  573726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.crt.0467c358 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.144]
	I1205 20:14:46.477948  573726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.crt.0467c358 ...
	I1205 20:14:46.477988  573726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.crt.0467c358: {Name:mkacabcdd8af22c1c9773092ebf01de398d6a8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:14:46.478162  573726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.key.0467c358 ...
	I1205 20:14:46.478175  573726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.key.0467c358: {Name:mk22068c44991f289750a0593ed612449a427958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:14:46.478252  573726 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.crt.0467c358 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.crt
	I1205 20:14:46.478352  573726 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.key.0467c358 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.key
	I1205 20:14:46.478415  573726 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/proxy-client.key
	I1205 20:14:46.478432  573726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/proxy-client.crt with IP's: []
	I1205 20:14:46.650293  573726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/proxy-client.crt ...
	I1205 20:14:46.650326  573726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/proxy-client.crt: {Name:mkb368f7e2d47ca5182ce55048c74113ea1c3d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:14:46.650497  573726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/proxy-client.key ...
	I1205 20:14:46.650511  573726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/proxy-client.key: {Name:mke2340f6f5e00109910d1520e52fd0acf732223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:14:46.650677  573726 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:14:46.650715  573726 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:14:46.650726  573726 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:14:46.650748  573726 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:14:46.650771  573726 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:14:46.650794  573726 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:14:46.650830  573726 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:14:46.651533  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:14:46.681091  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:14:46.708911  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:14:46.740151  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:14:46.767075  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 20:14:46.793079  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:14:46.819099  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:14:46.845416  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:14:46.871736  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:14:46.898193  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:14:46.924491  573726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:14:46.954090  573726 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:14:46.974214  573726 ssh_runner.go:195] Run: openssl version
	I1205 20:14:46.980727  573726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:14:46.993310  573726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:14:46.998428  573726 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:14:46.998494  573726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:14:47.004719  573726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:14:47.019589  573726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:14:47.035544  573726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:14:47.040486  573726 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:14:47.040542  573726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:14:47.047990  573726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:14:47.060396  573726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:14:47.074935  573726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:14:47.083626  573726 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:14:47.083707  573726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:14:47.095173  573726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
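The openssl/ln pairs above install each CA into the system trust directory under its OpenSSL subject-hash name; for the minikube CA, for example, the steps reduce to:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, as seen above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0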
	I1205 20:14:47.117383  573726 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:14:47.131882  573726 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:14:47.131951  573726 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-886958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-886958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:14:47.132072  573726 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:14:47.132165  573726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:14:47.176348  573726 cri.go:89] found id: ""
	I1205 20:14:47.176430  573726 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:14:47.186906  573726 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:14:47.197405  573726 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:14:47.207562  573726 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:14:47.207588  573726 kubeadm.go:157] found existing configuration files:
	
	I1205 20:14:47.207644  573726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:14:47.217588  573726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:14:47.217675  573726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:14:47.228082  573726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:14:47.238728  573726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:14:47.238797  573726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:14:47.249456  573726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:14:47.259584  573726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:14:47.259655  573726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:14:47.270337  573726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:14:47.281097  573726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:14:47.281191  573726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:14:47.291735  573726 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:14:47.404787  573726 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:14:47.404897  573726 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:14:47.573552  573726 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:14:47.573719  573726 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:14:47.573847  573726 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:14:47.772766  573726 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:14:47.774940  573726 out.go:235]   - Generating certificates and keys ...
	I1205 20:14:47.775041  573726 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:14:47.775167  573726 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:14:47.894908  573726 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:14:47.968954  573726 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:14:48.214370  573726 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:14:48.618100  573726 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:14:48.852658  573726 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:14:48.852894  573726 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-886958 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I1205 20:14:49.066012  573726 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:14:49.066287  573726 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-886958 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I1205 20:14:49.192521  573726 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:14:49.271345  573726 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:14:49.410554  573726 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:14:49.410898  573726 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:14:49.976448  573726 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:14:50.059974  573726 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:14:50.140783  573726 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:14:50.281289  573726 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:14:50.301201  573726 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:14:50.302343  573726 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:14:50.302426  573726 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:14:50.431331  573726 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:14:50.433392  573726 out.go:235]   - Booting up control plane ...
	I1205 20:14:50.433549  573726 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:14:50.439448  573726 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:14:50.440836  573726 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:14:50.443723  573726 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:14:50.449495  573726 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:15:30.441080  573726 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:15:30.441988  573726 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:15:30.442224  573726 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:15:35.442703  573726 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:15:35.442993  573726 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:15:45.442316  573726 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:15:45.442544  573726 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:16:05.442417  573726 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:16:05.442666  573726 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:16:45.444075  573726 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:16:45.444406  573726 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:16:45.444432  573726 kubeadm.go:310] 
	I1205 20:16:45.444497  573726 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:16:45.444570  573726 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:16:45.444592  573726 kubeadm.go:310] 
	I1205 20:16:45.444638  573726 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:16:45.444686  573726 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:16:45.444861  573726 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:16:45.444880  573726 kubeadm.go:310] 
	I1205 20:16:45.445024  573726 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:16:45.445081  573726 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:16:45.445132  573726 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:16:45.445142  573726 kubeadm.go:310] 
	I1205 20:16:45.445285  573726 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:16:45.445420  573726 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:16:45.445436  573726 kubeadm.go:310] 
	I1205 20:16:45.445574  573726 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:16:45.445701  573726 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:16:45.445812  573726 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:16:45.445923  573726 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:16:45.445942  573726 kubeadm.go:310] 
	I1205 20:16:45.446557  573726 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:16:45.446653  573726 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:16:45.446734  573726 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
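Since the init failure comes down to the kubelet never answering its health endpoint, first-line triage on the guest follows kubeadm's own hints above (a sketch):

    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 100
    curl -sS http://localhost:10248/healthz
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause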
	W1205 20:16:45.446937  573726 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-886958 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-886958 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-886958 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-886958 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 20:16:45.446987  573726 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:16:47.820651  573726 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.373627987s)
	I1205 20:16:47.820748  573726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:16:47.841168  573726 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:16:47.855404  573726 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:16:47.855434  573726 kubeadm.go:157] found existing configuration files:
	
	I1205 20:16:47.855487  573726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:16:47.868188  573726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:16:47.868261  573726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:16:47.880479  573726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:16:47.891395  573726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:16:47.891478  573726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:16:47.902368  573726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:16:47.913581  573726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:16:47.913652  573726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:16:47.925973  573726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:16:47.937261  573726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:16:47.937336  573726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:16:47.948207  573726 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:16:48.201065  573726 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:18:44.238207  573726 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:18:44.238362  573726 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 20:18:44.240909  573726 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:18:44.240979  573726 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:18:44.241083  573726 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:18:44.241189  573726 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:18:44.241298  573726 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:18:44.241384  573726 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:18:44.319246  573726 out.go:235]   - Generating certificates and keys ...
	I1205 20:18:44.319405  573726 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:18:44.319486  573726 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:18:44.319589  573726 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:18:44.319714  573726 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:18:44.319820  573726 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:18:44.319890  573726 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:18:44.319967  573726 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:18:44.320036  573726 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:18:44.320132  573726 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:18:44.320221  573726 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:18:44.320281  573726 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:18:44.320359  573726 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:18:44.320444  573726 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:18:44.320526  573726 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:18:44.320617  573726 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:18:44.320692  573726 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:18:44.320783  573726 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:18:44.320907  573726 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:18:44.320978  573726 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:18:44.321079  573726 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:18:44.390501  573726 out.go:235]   - Booting up control plane ...
	I1205 20:18:44.390672  573726 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:18:44.390766  573726 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:18:44.390854  573726 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:18:44.390965  573726 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:18:44.391193  573726 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:18:44.391259  573726 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:18:44.391353  573726 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:18:44.391569  573726 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:18:44.391646  573726 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:18:44.391831  573726 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:18:44.391908  573726 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:18:44.392100  573726 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:18:44.392174  573726 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:18:44.392472  573726 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:18:44.392614  573726 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:18:44.392853  573726 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:18:44.392874  573726 kubeadm.go:310] 
	I1205 20:18:44.392939  573726 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:18:44.393016  573726 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:18:44.393035  573726 kubeadm.go:310] 
	I1205 20:18:44.393122  573726 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:18:44.393183  573726 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:18:44.393331  573726 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:18:44.393342  573726 kubeadm.go:310] 
	I1205 20:18:44.393480  573726 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:18:44.393535  573726 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:18:44.393580  573726 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:18:44.393594  573726 kubeadm.go:310] 
	I1205 20:18:44.393743  573726 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:18:44.393855  573726 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:18:44.393865  573726 kubeadm.go:310] 
	I1205 20:18:44.394025  573726 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:18:44.394166  573726 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:18:44.394280  573726 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:18:44.394389  573726 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:18:44.394473  573726 kubeadm.go:310] 
	I1205 20:18:44.394486  573726 kubeadm.go:394] duration metric: took 3m57.262541076s to StartCluster
	I1205 20:18:44.394538  573726 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:18:44.394614  573726 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:18:44.441916  573726 cri.go:89] found id: ""
	I1205 20:18:44.441956  573726 logs.go:282] 0 containers: []
	W1205 20:18:44.441969  573726 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:18:44.441979  573726 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:18:44.442050  573726 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:18:44.486837  573726 cri.go:89] found id: ""
	I1205 20:18:44.486876  573726 logs.go:282] 0 containers: []
	W1205 20:18:44.486890  573726 logs.go:284] No container was found matching "etcd"
	I1205 20:18:44.486899  573726 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:18:44.486969  573726 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:18:44.530211  573726 cri.go:89] found id: ""
	I1205 20:18:44.530261  573726 logs.go:282] 0 containers: []
	W1205 20:18:44.530275  573726 logs.go:284] No container was found matching "coredns"
	I1205 20:18:44.530285  573726 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:18:44.530367  573726 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:18:44.581801  573726 cri.go:89] found id: ""
	I1205 20:18:44.581840  573726 logs.go:282] 0 containers: []
	W1205 20:18:44.581852  573726 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:18:44.581861  573726 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:18:44.581927  573726 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:18:44.633933  573726 cri.go:89] found id: ""
	I1205 20:18:44.633975  573726 logs.go:282] 0 containers: []
	W1205 20:18:44.633991  573726 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:18:44.634001  573726 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:18:44.634106  573726 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:18:44.686460  573726 cri.go:89] found id: ""
	I1205 20:18:44.686495  573726 logs.go:282] 0 containers: []
	W1205 20:18:44.686507  573726 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:18:44.686516  573726 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:18:44.686590  573726 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:18:44.738690  573726 cri.go:89] found id: ""
	I1205 20:18:44.738729  573726 logs.go:282] 0 containers: []
	W1205 20:18:44.738743  573726 logs.go:284] No container was found matching "kindnet"
	I1205 20:18:44.738758  573726 logs.go:123] Gathering logs for kubelet ...
	I1205 20:18:44.738775  573726 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:18:44.816437  573726 logs.go:123] Gathering logs for dmesg ...
	I1205 20:18:44.816489  573726 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:18:44.836236  573726 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:18:44.836291  573726 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:18:44.978462  573726 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:18:44.978495  573726 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:18:44.978512  573726 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:18:45.128343  573726 logs.go:123] Gathering logs for container status ...
	I1205 20:18:45.128401  573726 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 20:18:45.179949  573726 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 20:18:45.180020  573726 out.go:270] * 
	* 
	W1205 20:18:45.180100  573726 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:18:45.180120  573726 out.go:270] * 
	* 
	W1205 20:18:45.181097  573726 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:18:45.184388  573726 out.go:201] 
	W1205 20:18:45.185751  573726 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:18:45.185792  573726 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 20:18:45.185813  573726 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 20:18:45.187414  573726 out.go:201] 

                                                
                                                
** /stderr **
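The failure captured above is ultimately a kubelet health-check timeout: kubeadm polls http://localhost:10248/healthz and never gets an answer, so wait-control-plane gives up after 4m0s. The log already names the node-level diagnostics to run; collected in one place as a sketch (run via `minikube ssh -p kubernetes-upgrade-886958` or directly on the VM, same endpoints and socket paths the log prints):

    # Did the kubelet service start at all, and why did it exit?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet

    # List the Kubernetes containers CRI-O managed to start, if any
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Once a failing container is identified, inspect its logs (CONTAINERID is a placeholder)
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

    # The healthz endpoint the kubelet-check loop polls
    curl -sSL http://localhost:10248/healthz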
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-886958 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
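The exit status 109 start above used the kubelet defaults; the log's own suggestion is to retry with the kubelet cgroup driver pinned to systemd. A hedged sketch of that retry, reusing the exact flags of the failed invocation plus the suggested --extra-config (whether it clears the timeout on this host is not confirmed by this report):

    out/minikube-linux-amd64 start -p kubernetes-upgrade-886958 \
      --memory=2200 --kubernetes-version=v1.20.0 \
      --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd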
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-886958
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-886958: (1.446246396s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-886958 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-886958 status --format={{.Host}}: exit status 7 (88.115188ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-886958 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-886958 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m36.438120488s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-886958 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-886958 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-886958 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (101.106591ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-886958] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-886958
	    minikube start -p kubernetes-upgrade-886958 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8869582 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-886958 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-886958 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-886958 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.543506444s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-05 20:21:34.94205501 +0000 UTC m=+4787.147675354
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-886958 -n kubernetes-upgrade-886958
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-886958 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-886958 logs -n 25: (1.871120044s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-383287 sudo cat              | cilium-383287             | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-383287 sudo cat              | cilium-383287             | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-383287 sudo                  | cilium-383287             | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-383287 sudo                  | cilium-383287             | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-383287 sudo                  | cilium-383287             | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-383287 sudo find             | cilium-383287             | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-383287 sudo crio             | cilium-383287             | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-383287                       | cilium-383287             | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC | 05 Dec 24 20:18 UTC |
	| start   | -p force-systemd-env-801098            | force-systemd-env-801098  | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC | 05 Dec 24 20:19 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-886958           | kubernetes-upgrade-886958 | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC | 05 Dec 24 20:18 UTC |
	| start   | -p kubernetes-upgrade-886958           | kubernetes-upgrade-886958 | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC | 05 Dec 24 20:20 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-130544 ssh cat      | force-systemd-flag-130544 | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC | 05 Dec 24 20:18 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-130544           | force-systemd-flag-130544 | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC | 05 Dec 24 20:18 UTC |
	| start   | -p cert-expiration-315387              | cert-expiration-315387    | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC | 05 Dec 24 20:20 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-739327 sudo            | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:18 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-739327                 | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| start   | -p cert-options-790679                 | cert-options-790679       | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:21 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-801098            | force-systemd-env-801098  | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| start   | -p old-k8s-version-386085              | old-k8s-version-386085    | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-886958           | kubernetes-upgrade-886958 | jenkins | v1.34.0 | 05 Dec 24 20:20 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-886958           | kubernetes-upgrade-886958 | jenkins | v1.34.0 | 05 Dec 24 20:20 UTC | 05 Dec 24 20:21 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-790679 ssh                | cert-options-790679       | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-790679 -- sudo         | cert-options-790679       | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-790679                 | cert-options-790679       | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p no-preload-816185                   | no-preload-816185         | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:21:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:21:03.235478  582281 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:21:03.235608  582281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:21:03.235626  582281 out.go:358] Setting ErrFile to fd 2...
	I1205 20:21:03.235633  582281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:21:03.235837  582281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:21:03.236545  582281 out.go:352] Setting JSON to false
	I1205 20:21:03.237651  582281 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":11009,"bootTime":1733419054,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:21:03.237759  582281 start.go:139] virtualization: kvm guest
	I1205 20:21:03.240291  582281 out.go:177] * [no-preload-816185] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:21:03.242460  582281 notify.go:220] Checking for updates...
	I1205 20:21:03.242473  582281 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:21:03.244010  582281 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:21:03.245621  582281 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:21:03.247206  582281 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:21:03.248794  582281 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:21:03.250447  582281 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:21:03.252605  582281 config.go:182] Loaded profile config "cert-expiration-315387": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:21:03.252729  582281 config.go:182] Loaded profile config "kubernetes-upgrade-886958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:21:03.252861  582281 config.go:182] Loaded profile config "old-k8s-version-386085": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:21:03.252988  582281 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:21:03.292191  582281 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:21:03.293643  582281 start.go:297] selected driver: kvm2
	I1205 20:21:03.293659  582281 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:21:03.293675  582281 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:21:03.294405  582281 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:21:03.294501  582281 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:21:03.310727  582281 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:21:03.310798  582281 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:21:03.311145  582281 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:21:03.311200  582281 cni.go:84] Creating CNI manager for ""
	I1205 20:21:03.311281  582281 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:21:03.311299  582281 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:21:03.311361  582281 start.go:340] cluster config:
	{Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:21:03.311497  582281 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:21:03.313474  582281 out.go:177] * Starting "no-preload-816185" primary control-plane node in "no-preload-816185" cluster
	I1205 20:20:59.715775  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:59.716340  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:59.716369  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:59.716305  581908 retry.go:31] will retry after 3.389498215s: waiting for machine to come up
	I1205 20:21:03.107995  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:03.108504  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:21:03.108556  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:21:03.108453  581908 retry.go:31] will retry after 4.383898803s: waiting for machine to come up
	I1205 20:21:03.315021  582281 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:21:03.315158  582281 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/config.json ...
	I1205 20:21:03.315194  582281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/config.json: {Name:mk15842aa2a3575368799f90d0cf0d4083859391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:03.315258  582281 cache.go:107] acquiring lock: {Name:mk50b69cec83462210630960a67178c99c3d82e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:21:03.315275  582281 cache.go:107] acquiring lock: {Name:mk1979b87ecdcddf026cae4d90474f4ce3a1ac17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:21:03.315328  582281 cache.go:107] acquiring lock: {Name:mk9dccbef77fe7a453f3944eaf36d764bf568663 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:21:03.315375  582281 cache.go:115] /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 20:21:03.315383  582281 cache.go:107] acquiring lock: {Name:mke2d566ae02ab6737609f1123a37a8604c34230 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:21:03.315408  582281 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 147.233µs
	I1205 20:21:03.315414  582281 start.go:360] acquireMachinesLock for no-preload-816185: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:21:03.315422  582281 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 20:21:03.315315  582281 cache.go:107] acquiring lock: {Name:mk89cc800c3ad08d7cec2b3c6f0136ab77570a06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:21:03.315400  582281 cache.go:107] acquiring lock: {Name:mk98ff77a071f8d1bc7f505b7a23b85b59239525 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:21:03.315428  582281 cache.go:107] acquiring lock: {Name:mkcea757868d54e928fa47918e5670b04793c9ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:21:03.315460  582281 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:21:03.315477  582281 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:21:03.315520  582281 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:21:03.315583  582281 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:21:03.315619  582281 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 20:21:03.315603  582281 cache.go:107] acquiring lock: {Name:mk08945979ac8684ddc52d13bef0f284584bfa7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:21:03.315640  582281 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:21:03.315799  582281 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:21:03.317006  582281 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 20:21:03.317019  582281 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:21:03.317001  582281 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:21:03.317008  582281 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:21:03.317073  582281 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:21:03.317116  582281 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:21:03.317211  582281 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:21:03.492493  582281 cache.go:162] opening:  /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 20:21:03.496988  582281 cache.go:162] opening:  /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 20:21:03.525380  582281 cache.go:162] opening:  /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 20:21:03.526303  582281 cache.go:162] opening:  /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I1205 20:21:03.570643  582281 cache.go:162] opening:  /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 20:21:03.573777  582281 cache.go:162] opening:  /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 20:21:03.606888  582281 cache.go:157] /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I1205 20:21:03.606915  582281 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 291.562594ms
	I1205 20:21:03.606930  582281 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I1205 20:21:03.670063  582281 cache.go:162] opening:  /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 20:21:03.968319  582281 cache.go:157] /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1205 20:21:03.968376  582281 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2" took 652.808514ms
	I1205 20:21:03.968395  582281 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1205 20:21:05.143696  582281 cache.go:157] /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1205 20:21:05.143795  582281 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 1.828411614s
	I1205 20:21:05.143829  582281 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1205 20:21:05.190008  582281 cache.go:157] /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1205 20:21:05.190035  582281 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2" took 1.874775642s
	I1205 20:21:05.190048  582281 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1205 20:21:05.208712  582281 cache.go:157] /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1205 20:21:05.208746  582281 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2" took 1.893434106s
	I1205 20:21:05.208759  582281 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1205 20:21:05.330904  582281 cache.go:157] /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1205 20:21:05.330932  582281 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2" took 2.015652244s
	I1205 20:21:05.330945  582281 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1205 20:21:05.481211  582281 cache.go:157] /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I1205 20:21:05.481239  582281 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 2.165856793s
	I1205 20:21:05.481252  582281 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1205 20:21:05.481270  582281 cache.go:87] Successfully saved all images to host disk.
	I1205 20:21:07.494584  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.495105  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has current primary IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.495134  581232 main.go:141] libmachine: (old-k8s-version-386085) Found IP for machine: 192.168.72.144
	I1205 20:21:07.495156  581232 main.go:141] libmachine: (old-k8s-version-386085) Reserving static IP address...
	I1205 20:21:07.495523  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"} in network mk-old-k8s-version-386085
	I1205 20:21:07.574494  581232 main.go:141] libmachine: (old-k8s-version-386085) Reserved static IP address: 192.168.72.144
	I1205 20:21:07.574530  581232 main.go:141] libmachine: (old-k8s-version-386085) Waiting for SSH to be available...
	I1205 20:21:07.574539  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Getting to WaitForSSH function...
	I1205 20:21:07.577431  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.577829  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:07.577864  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.577948  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH client type: external
	I1205 20:21:07.577972  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa (-rw-------)
	I1205 20:21:07.578011  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:21:07.578024  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | About to run SSH command:
	I1205 20:21:07.578035  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | exit 0
	I1205 20:21:07.704757  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | SSH cmd err, output: <nil>: 
	I1205 20:21:07.705116  581232 main.go:141] libmachine: (old-k8s-version-386085) KVM machine creation complete!
	I1205 20:21:07.705414  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:21:07.706017  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:07.706260  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:07.706466  581232 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:21:07.706487  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetState
	I1205 20:21:07.707864  581232 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:21:07.707883  581232 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:21:07.707891  581232 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:21:07.707899  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:07.710311  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.710665  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:07.710694  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.710808  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:07.710999  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.711171  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.711297  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:07.711467  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:07.711677  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:07.711693  581232 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:21:07.823658  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:21:07.823683  581232 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:21:07.823704  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:07.826564  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.826917  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:07.826951  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.827059  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:07.827293  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.827462  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.827626  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:07.827767  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:07.827949  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:07.827960  581232 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:21:07.937280  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:21:07.937409  581232 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:21:07.937422  581232 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:21:07.937431  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:21:07.937697  581232 buildroot.go:166] provisioning hostname "old-k8s-version-386085"
	I1205 20:21:07.937709  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:21:07.937878  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:07.940420  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.940734  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:07.940777  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.940883  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:07.941098  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.941242  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.941373  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:07.941512  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:07.941695  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:07.941707  581232 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386085 && echo "old-k8s-version-386085" | sudo tee /etc/hostname
	I1205 20:21:08.070277  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386085
	
	I1205 20:21:08.070339  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:08.073960  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.074511  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:08.074547  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.074763  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:08.075027  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:08.075266  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:08.075447  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:08.075697  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:08.075900  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:08.075918  581232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386085' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386085/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386085' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:21:08.195853  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:21:08.195898  581232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:21:08.196009  581232 buildroot.go:174] setting up certificates
	I1205 20:21:08.196032  581232 provision.go:84] configureAuth start
	I1205 20:21:08.196054  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:21:08.196382  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:21:08.199491  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.199778  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:08.199801  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.199958  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:08.202612  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.203010  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:08.203035  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.203257  581232 provision.go:143] copyHostCerts
	I1205 20:21:08.203337  581232 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:21:08.203362  581232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:21:08.203424  581232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:21:08.203539  581232 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:21:08.203550  581232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:21:08.203571  581232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:21:08.203637  581232 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:21:08.203645  581232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:21:08.203663  581232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:21:08.203723  581232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386085 san=[127.0.0.1 192.168.72.144 localhost minikube old-k8s-version-386085]
	I1205 20:21:08.616043  581232 provision.go:177] copyRemoteCerts
	I1205 20:21:08.616123  581232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:21:08.616154  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:08.619707  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.620149  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:08.620177  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.620431  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:08.620682  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:08.620858  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:08.621024  581232 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:21:08.707529  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:21:08.735540  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:21:08.762216  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:21:08.787370  581232 provision.go:87] duration metric: took 591.317131ms to configureAuth
	I1205 20:21:08.787406  581232 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:21:08.787580  581232 config.go:182] Loaded profile config "old-k8s-version-386085": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:21:08.787674  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:08.790700  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.790984  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:08.791019  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.791168  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:08.791402  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:08.791575  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:08.791727  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:08.791918  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:08.792153  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:08.792174  581232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:21:09.285706  581730 start.go:364] duration metric: took 45.708728899s to acquireMachinesLock for "kubernetes-upgrade-886958"
	I1205 20:21:09.285794  581730 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:21:09.285807  581730 fix.go:54] fixHost starting: 
	I1205 20:21:09.286309  581730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:21:09.286368  581730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:21:09.304391  581730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I1205 20:21:09.304913  581730 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:21:09.305530  581730 main.go:141] libmachine: Using API Version  1
	I1205 20:21:09.305552  581730 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:21:09.305912  581730 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:21:09.306144  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:21:09.306303  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetState
	I1205 20:21:09.307915  581730 fix.go:112] recreateIfNeeded on kubernetes-upgrade-886958: state=Running err=<nil>
	W1205 20:21:09.307950  581730 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:21:09.310153  581730 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-886958" VM ...
	I1205 20:21:09.033410  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:21:09.033456  581232 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:21:09.033470  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetURL
	I1205 20:21:09.034850  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using libvirt version 6000000
	I1205 20:21:09.037053  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.037381  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.037419  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.037664  581232 main.go:141] libmachine: Docker is up and running!
	I1205 20:21:09.037680  581232 main.go:141] libmachine: Reticulating splines...
	I1205 20:21:09.037688  581232 client.go:171] duration metric: took 25.94989327s to LocalClient.Create
	I1205 20:21:09.037710  581232 start.go:167] duration metric: took 25.94996229s to libmachine.API.Create "old-k8s-version-386085"
	I1205 20:21:09.037720  581232 start.go:293] postStartSetup for "old-k8s-version-386085" (driver="kvm2")
	I1205 20:21:09.037731  581232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:21:09.037751  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:09.038012  581232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:21:09.038040  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:09.040077  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.040435  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.040461  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.040671  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:09.040851  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:09.041004  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:09.041165  581232 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:21:09.123406  581232 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:21:09.134277  581232 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:21:09.134307  581232 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:21:09.134382  581232 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:21:09.134501  581232 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:21:09.134685  581232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:21:09.145011  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:21:09.169202  581232 start.go:296] duration metric: took 131.464611ms for postStartSetup
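
The postStartSetup lines above scan the profile's local files tree and copy anything found there (here .minikube/files/etc/ssl/certs/5381862.pem) to the matching absolute path inside the guest. A minimal Go sketch of that path mapping, assuming only that remote paths mirror the layout under the files root; the function name is illustrative, not minikube's:

// Hypothetical sketch (not minikube's actual code): map files under a local
// "files" root onto absolute guest paths, e.g.
// .minikube/files/etc/ssl/certs/5381862.pem -> /etc/ssl/certs/5381862.pem.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// mapAssets walks localRoot and returns local->remote path pairs, where the
// remote path is the file's path relative to localRoot rooted at "/".
func mapAssets(localRoot string) (map[string]string, error) {
	out := map[string]string{}
	err := filepath.WalkDir(localRoot, func(p string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil
		}
		rel, err := filepath.Rel(localRoot, p)
		if err != nil {
			return err
		}
		out[p] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return out, err
}

func main() {
	pairs, err := mapAssets("/home/jenkins/minikube-integration/20052-530897/.minikube/files")
	if err != nil {
		fmt.Println("walk error:", err)
		return
	}
	for local, remote := range pairs {
		fmt.Printf("%s -> %s\n", local, remote)
	}
}
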
	I1205 20:21:09.169267  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:21:09.169881  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:21:09.172535  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.172799  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.172824  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.173121  581232 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json ...
	I1205 20:21:09.173329  581232 start.go:128] duration metric: took 26.107477694s to createHost
	I1205 20:21:09.173353  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:09.175715  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.176049  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.176083  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.176317  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:09.176509  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:09.176653  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:09.176792  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:09.176924  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:09.177093  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:09.177103  581232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:21:09.285520  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430069.248530299
	
	I1205 20:21:09.285546  581232 fix.go:216] guest clock: 1733430069.248530299
	I1205 20:21:09.285555  581232 fix.go:229] Guest: 2024-12-05 20:21:09.248530299 +0000 UTC Remote: 2024-12-05 20:21:09.173342326 +0000 UTC m=+85.302458541 (delta=75.187973ms)
	I1205 20:21:09.285582  581232 fix.go:200] guest clock delta is within tolerance: 75.187973ms
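
The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host clock, and accept the skew because the ~75ms delta is inside tolerance. A sketch of that comparison, with the tolerance value assumed rather than taken from minikube:

// Hypothetical sketch: parse the guest's `date +%s.%N` output, compare it
// with a host timestamp and report whether the delta stays under a tolerance.
package main

import (
	"fmt"
	"strconv"
	"time"
)

const clockTolerance = 5 * time.Second // assumed threshold, not minikube's exact value

func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Values taken from the log lines above.
	delta, err := guestClockDelta("1733430069.248530299", time.Unix(0, 1733430069173342326))
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta < clockTolerance)
}
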
	I1205 20:21:09.285589  581232 start.go:83] releasing machines lock for "old-k8s-version-386085", held for 26.21993585s
	I1205 20:21:09.285621  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:09.285923  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:21:09.289016  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.289485  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.289521  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.289797  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:09.290382  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:09.290592  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:09.290681  581232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:21:09.290742  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:09.290850  581232 ssh_runner.go:195] Run: cat /version.json
	I1205 20:21:09.290879  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:09.293689  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.293863  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.294121  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.294150  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.294342  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:09.294348  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.294374  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.294533  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:09.294545  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:09.294714  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:09.294711  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:09.294905  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:09.294910  581232 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:21:09.295088  581232 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:21:09.374022  581232 ssh_runner.go:195] Run: systemctl --version
	I1205 20:21:09.404694  581232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:21:09.567611  581232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:21:09.574634  581232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:21:09.574721  581232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:21:09.593608  581232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
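
The find/mv step above sidelines any bridge or podman CNI config in /etc/cni/net.d by renaming it with a .mk_disabled suffix. A Go sketch of the same idea; the function name is illustrative and the directory must exist and be writable:

// Hypothetical sketch: rename bridge/podman CNI configs out of the way so
// they no longer load, mirroring the find/mv command in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println("disabled:", files, "err:", err)
}
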
	I1205 20:21:09.593638  581232 start.go:495] detecting cgroup driver to use...
	I1205 20:21:09.593729  581232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:21:09.611604  581232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:21:09.628223  581232 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:21:09.628332  581232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:21:09.643477  581232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:21:09.659106  581232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:21:09.787468  581232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:21:09.941819  581232 docker.go:233] disabling docker service ...
	I1205 20:21:09.941895  581232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:21:09.959710  581232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:21:09.974459  581232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:21:10.133136  581232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:21:10.292148  581232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
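
The docker.go lines above stop, disable and mask the Docker units and then check whether the service is still active, continuing even when individual systemctl calls fail. A rough sketch of that sequence; the unit list is taken from the log and the error handling is simplified:

// Hypothetical sketch: run the systemctl steps from the log, tolerating
// per-step failures, then check whether docker is still active.
package main

import (
	"fmt"
	"os/exec"
)

func disableDocker() {
	steps := [][]string{
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			// Individual failures are reported but not fatal.
			fmt.Printf("%v: %v (%s)\n", s, err, out)
		}
	}
	// Final check loosely mirrors the is-active probe in the log.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "docker").Run()
	fmt.Println("docker still active:", err == nil)
}

func main() { disableDocker() }
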
	I1205 20:21:10.307485  581232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:21:10.329479  581232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:21:10.329555  581232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:10.341416  581232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:21:10.341504  581232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:10.353522  581232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:10.365470  581232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
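
The sed invocations above swap the pause image and cgroup manager in the CRI-O drop-in and re-add a conmon_cgroup setting. A rough Go equivalent of that rewrite, operating on the file contents in memory; the function and sample input are illustrative, not minikube's code:

// Hypothetical sketch: rewrite pause_image and cgroup_manager lines in a
// crio drop-in and insert conmon_cgroup = "pod" after the cgroup_manager line.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func rewriteCrioConf(in, pauseImage, cgroupManager string) string {
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(in, fmt.Sprintf("pause_image = %q", pauseImage))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	// Drop any existing conmon_cgroup line, then add one after cgroup_manager.
	out = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(out, "")
	return strings.Replace(out,
		fmt.Sprintf("cgroup_manager = %q", cgroupManager),
		fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager), 1)
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(conf, "registry.k8s.io/pause:3.2", "cgroupfs"))
}

Editing the whole file in memory and writing it back once avoids the partially edited states that a chain of in-place sed calls can leave behind if one step fails.
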
	I1205 20:21:10.377636  581232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:21:10.390062  581232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:21:10.400788  581232 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:21:10.400863  581232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:21:10.415612  581232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:21:10.426280  581232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:21:10.558914  581232 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:21:10.659230  581232 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:21:10.659325  581232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:21:10.664562  581232 start.go:563] Will wait 60s for crictl version
	I1205 20:21:10.664642  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:10.668647  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:21:10.709856  581232 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
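
The runtime probe above waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for its version. A small sketch of such a wait loop, assuming a simple stat-and-sleep poll; the 500ms interval is an assumption:

// Hypothetical sketch: poll for the CRI socket with a deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second)
	fmt.Println("socket ready:", err == nil, err)
}
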
	I1205 20:21:10.709961  581232 ssh_runner.go:195] Run: crio --version
	I1205 20:21:10.740609  581232 ssh_runner.go:195] Run: crio --version
	I1205 20:21:10.771324  581232 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 20:21:09.311555  581730 machine.go:93] provisionDockerMachine start ...
	I1205 20:21:09.311595  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:21:09.311837  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:21:09.314703  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.315246  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:09.315284  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.315465  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:21:09.315666  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:09.315827  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:09.315973  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:21:09.316258  581730 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:09.316533  581730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1205 20:21:09.316549  581730 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:21:09.422388  581730 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-886958
	
	I1205 20:21:09.422422  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetMachineName
	I1205 20:21:09.422665  581730 buildroot.go:166] provisioning hostname "kubernetes-upgrade-886958"
	I1205 20:21:09.422699  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetMachineName
	I1205 20:21:09.422898  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:21:09.425746  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.426106  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:09.426142  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.426315  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:21:09.426507  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:09.426636  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:09.426783  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:21:09.426989  581730 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:09.427213  581730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1205 20:21:09.427227  581730 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-886958 && echo "kubernetes-upgrade-886958" | sudo tee /etc/hostname
	I1205 20:21:09.552209  581730 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-886958
	
	I1205 20:21:09.552242  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:21:09.555157  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.555586  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:09.555638  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.555838  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:21:09.556112  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:09.556362  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:09.556560  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:21:09.556765  581730 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:09.557000  581730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1205 20:21:09.557021  581730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-886958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-886958/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-886958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:21:09.666520  581730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
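
The shell snippet above only edits /etc/hosts when no line already names the host, preferring to rewrite an existing 127.0.1.1 entry and otherwise appending one. A Go sketch of the same idempotent update, with the decision logic separated from the file I/O; names are illustrative:

// Hypothetical sketch: compute an updated /etc/hosts mapping for a hostname
// without touching the file when the name is already present.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(contents, hostname string) (string, bool) {
	lines := strings.Split(contents, "\n")
	for _, l := range lines {
		for _, f := range strings.Fields(l) {
			if f == hostname {
				return contents, false // already mapped, nothing to do
			}
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return strings.Join(lines, "\n"), true
		}
	}
	return contents + "\n127.0.1.1 " + hostname + "\n", true
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	updated, changed := ensureHostsEntry(string(data), "kubernetes-upgrade-886958")
	fmt.Println("needs update:", changed, "new length:", len(updated))
}
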
	I1205 20:21:09.666557  581730 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:21:09.666583  581730 buildroot.go:174] setting up certificates
	I1205 20:21:09.666594  581730 provision.go:84] configureAuth start
	I1205 20:21:09.666604  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetMachineName
	I1205 20:21:09.666917  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetIP
	I1205 20:21:09.670017  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.670452  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:09.670483  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.670680  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:21:09.673198  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.673587  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:09.673615  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.673759  581730 provision.go:143] copyHostCerts
	I1205 20:21:09.673819  581730 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:21:09.673841  581730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:21:09.673895  581730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:21:09.673995  581730 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:21:09.674003  581730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:21:09.674022  581730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:21:09.674118  581730 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:21:09.674125  581730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:21:09.674142  581730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:21:09.674207  581730 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-886958 san=[127.0.0.1 192.168.39.144 kubernetes-upgrade-886958 localhost minikube]
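
The provision step above generates a server certificate whose SANs cover loopback, the VM IP and the machine names listed in the log. A self-signed Go sketch with the same SAN set, assuming ECDSA keys; the real flow signs against the minikube CA rather than self-signing:

// Hypothetical sketch: build an x509 server certificate template carrying
// DNS and IP SANs and self-sign it.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-886958"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-886958", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.144")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println("server cert DER bytes:", len(der))
}
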
	I1205 20:21:09.931455  581730 provision.go:177] copyRemoteCerts
	I1205 20:21:09.931518  581730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:21:09.931557  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:21:09.934904  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.935301  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:09.935340  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:09.935502  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:21:09.935743  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:09.935892  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:21:09.936018  581730 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa Username:docker}
	I1205 20:21:10.021057  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:21:10.054027  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:21:10.099138  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:21:10.126230  581730 provision.go:87] duration metric: took 459.617448ms to configureAuth
	I1205 20:21:10.126273  581730 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:21:10.126511  581730 config.go:182] Loaded profile config "kubernetes-upgrade-886958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:21:10.126623  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:21:10.130165  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:10.130609  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:10.130642  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:10.130854  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:21:10.131071  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:10.131300  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:10.131497  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:21:10.131710  581730 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:10.131948  581730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1205 20:21:10.131992  581730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:21:10.772795  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:21:10.775587  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:10.776018  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:10.776050  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:10.776296  581232 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:21:10.780713  581232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:21:10.794404  581232 kubeadm.go:883] updating cluster {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:21:10.794523  581232 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:21:10.794581  581232 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:21:10.826595  581232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:21:10.826669  581232 ssh_runner.go:195] Run: which lz4
	I1205 20:21:10.830855  581232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:21:10.835196  581232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:21:10.835253  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 20:21:12.603598  581232 crio.go:462] duration metric: took 1.772775692s to copy over tarball
	I1205 20:21:12.603701  581232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:21:15.190371  581232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.586625375s)
	I1205 20:21:15.190404  581232 crio.go:469] duration metric: took 2.586761927s to extract the tarball
	I1205 20:21:15.190415  581232 ssh_runner.go:146] rm: /preloaded.tar.lz4
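
The preload step above copies preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 into the guest, unpacks it under /var with xattrs preserved, and reports the duration. A sketch of that extraction via tar, assuming tar and lz4 are on PATH; the helper name and local paths are illustrative:

// Hypothetical sketch: extract an lz4-compressed image tarball into /var,
// preserving security.capability xattrs, and time the extraction.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func extractPreload(tarball, dest string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("tar: %w (%s)", err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := extractPreload("/preloaded.tar.lz4", "/var")
	fmt.Println("extract took:", d, "err:", err)
}
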
	I1205 20:21:15.233842  581232 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:21:15.285647  581232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:21:15.285696  581232 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:21:15.285798  581232 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:21:15.285817  581232 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.285850  581232 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.285877  581232 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.285854  581232 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:21:15.285943  581232 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.285946  581232 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.285817  581232 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:15.287809  581232 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:21:15.287858  581232 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.287897  581232 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:15.287981  581232 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.288060  581232 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.288128  581232 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.288054  581232 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:21:15.288333  581232 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.481573  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 20:21:15.481906  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.489619  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.502754  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.503222  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.533915  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:15.544213  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.613718  581232 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 20:21:15.613766  581232 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.613822  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.613821  581232 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 20:21:15.613860  581232 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 20:21:15.613908  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.677490  581232 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 20:21:15.677546  581232 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.677590  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.677603  581232 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 20:21:15.677645  581232 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.677655  581232 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 20:21:15.677686  581232 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.677690  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.677726  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.696757  581232 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 20:21:15.696816  581232 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.696828  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.696840  581232 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 20:21:15.696842  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:21:15.696856  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.696862  581232 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:15.696892  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.696899  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.696908  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.696903  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.805128  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.805194  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:21:15.805194  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.828044  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.828069  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.828097  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.828122  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:15.953536  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.953615  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:21:15.953677  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:16.003327  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:21:16.003459  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:16.003498  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:16.003543  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:16.095042  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 20:21:16.128437  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 20:21:16.128597  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:16.168072  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 20:21:16.168119  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 20:21:16.168286  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 20:21:16.168312  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:16.199436  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 20:21:16.224905  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 20:21:16.459721  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:21:16.602021  581232 cache_images.go:92] duration metric: took 1.316298413s to LoadCachedImages
	W1205 20:21:16.602121  581232 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1205 20:21:16.602138  581232 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.20.0 crio true true} ...
	I1205 20:21:16.602276  581232 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386085 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:21:16.602366  581232 ssh_runner.go:195] Run: crio config
	I1205 20:21:16.664835  581232 cni.go:84] Creating CNI manager for ""
	I1205 20:21:16.664862  581232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:21:16.664872  581232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:21:16.664896  581232 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386085 NodeName:old-k8s-version-386085 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:21:16.665072  581232 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386085"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
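
The kubeadm.yaml above is generated from the cluster parameters shown in the kubeadm options dump. A reduced Go sketch that renders a slice of such a document with text/template; it covers only a few of the fields and is not minikube's actual template:

// Hypothetical sketch: render a fragment of a kubeadm ClusterConfiguration
// from structured cluster parameters.
package main

import (
	"os"
	"text/template"
)

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type clusterParams struct {
	ControlPlaneEndpoint string
	APIServerPort        int
	KubernetesVersion    string
	DNSDomain            string
	PodSubnet            string
	ServiceCIDR          string
}

func main() {
	p := clusterParams{
		ControlPlaneEndpoint: "control-plane.minikube.internal",
		APIServerPort:        8443,
		KubernetesVersion:    "v1.20.0",
		DNSDomain:            "cluster.local",
		PodSubnet:            "10.244.0.0/16",
		ServiceCIDR:          "10.96.0.0/12",
	}
	if err := template.Must(template.New("kubeadm").Parse(clusterTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
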
	
	I1205 20:21:16.665144  581232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 20:21:16.676017  581232 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:21:16.676085  581232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:21:16.686073  581232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:21:16.705467  581232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:21:16.723780  581232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 20:21:16.742929  581232 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1205 20:21:16.747590  581232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:21:16.761580  581232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:21:16.887750  581232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:21:16.907151  581232 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085 for IP: 192.168.72.144
	I1205 20:21:16.907191  581232 certs.go:194] generating shared ca certs ...
	I1205 20:21:16.907216  581232 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:16.907435  581232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:21:16.907500  581232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:21:16.907514  581232 certs.go:256] generating profile certs ...
	I1205 20:21:16.907581  581232 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key
	I1205 20:21:16.907595  581232 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt with IP's: []
	I1205 20:21:17.059572  581232 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt ...
	I1205 20:21:17.059609  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: {Name:mkb3552afa22200472d8cbab774aa7d1cfbbc38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.059809  581232 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key ...
	I1205 20:21:17.059831  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key: {Name:mk4402cc2a008bc8b6e2d9e5c89265948fc7d161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.059959  581232 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18
	I1205 20:21:17.059988  581232 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt.87b35b18 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.144]
	I1205 20:21:17.283666  581232 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt.87b35b18 ...
	I1205 20:21:17.283710  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt.87b35b18: {Name:mk95e559c2ab4bdbf5838fd82bcdb5690297f040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.307051  581232 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18 ...
	I1205 20:21:17.307129  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18: {Name:mkcf0c9dfca85d1b074169b5536a3904e8d01895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.307304  581232 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt.87b35b18 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt
	I1205 20:21:17.307399  581232 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key
	I1205 20:21:17.307467  581232 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key
	I1205 20:21:17.307487  581232 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt with IP's: []
	I1205 20:21:17.754389  581232 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt ...
	I1205 20:21:17.754426  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt: {Name:mkfd0b51fca4395714f1ab65bfd9bca9985b097a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.754631  581232 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key ...
	I1205 20:21:17.754654  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key: {Name:mkbc9f8e3659a6cd377fafc79f8082dfc2d3efd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.754890  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:21:17.754948  581232 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:21:17.754964  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:21:17.754996  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:21:17.755029  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:21:17.755063  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:21:17.755133  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
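
The certs.go/crypto.go lines above generate profile certificates signed by the shared minikube CA, with the apiserver certificate carrying the IP SANs listed in the log (service VIP, loopback, and the node IP). A compilable sketch of that kind of CA-signed serving certificate using Go's crypto/x509; the throwaway CA in main, the 2048-bit keys and the output file name are assumptions for illustration, not minikube's actual choices.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SANs logged above: service VIP, loopback, and the node IP.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.144"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return err
        }
        crt := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        return os.WriteFile("apiserver.crt", crt, 0644)
    }

    func main() {
        // Throwaway self-signed CA so the sketch runs end to end.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)
        if err := signServingCert(caCert, caKey); err != nil {
            panic(err)
        }
    }
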
	I1205 20:21:17.755764  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:21:17.791982  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:21:17.821213  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:21:17.860922  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:21:17.889847  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 20:21:17.916460  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:21:17.943618  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:21:17.972693  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:21:18.017986  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:21:18.045526  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:21:18.075197  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:21:18.102963  581232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:21:18.121793  581232 ssh_runner.go:195] Run: openssl version
	I1205 20:21:18.128680  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:21:18.141429  581232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:21:18.147547  581232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:21:18.147631  581232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:21:18.154645  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:21:18.167345  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:21:18.179712  581232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:21:18.184996  581232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:21:18.185070  581232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:21:18.191832  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:21:18.204037  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:21:18.216442  581232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:21:18.221666  581232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:21:18.221745  581232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:21:18.228363  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
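
The `openssl x509 -hash -noout` / `ln -fs` pairs above install each CA certificate the way OpenSSL expects: compute the subject hash and point /etc/ssl/certs/<hash>.0 at the PEM. A short sketch of the same pattern, shelling out to openssl for the hash; the input path is the one from the log, and a real run needs root.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // mimic `ln -fs`: replace an existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
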
	I1205 20:21:18.240097  581232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:21:18.245323  581232 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:21:18.245395  581232 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:21:18.245512  581232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:21:18.245580  581232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:21:18.293809  581232 cri.go:89] found id: ""
	I1205 20:21:18.293898  581232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:21:18.306100  581232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:21:18.317203  581232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:21:18.328481  581232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:21:18.328507  581232 kubeadm.go:157] found existing configuration files:
	
	I1205 20:21:18.328576  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:21:18.339187  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:21:18.339281  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:21:18.349982  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:21:18.360102  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:21:18.360185  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:21:18.370950  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:21:18.380781  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:21:18.380860  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:21:18.391326  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:21:18.401434  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:21:18.401506  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
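
The grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is deleted so kubeadm can regenerate it. A sketch of that check-then-remove loop, with the file list and endpoint taken from the log.

    package main

    import (
        "os"
        "strings"
    )

    func cleanupStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                _ = os.Remove(f) // missing or stale: let kubeadm rewrite it
            }
        }
    }

    func main() {
        cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }
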
	I1205 20:21:18.412281  581232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:21:18.538345  581232 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:21:18.538522  581232 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:21:18.711502  581232 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:21:18.711671  581232 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:21:18.711826  581232 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:21:18.939597  581232 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:21:19.562085  582281 start.go:364] duration metric: took 16.246630408s to acquireMachinesLock for "no-preload-816185"
	I1205 20:21:19.562167  582281 start.go:93] Provisioning new machine with config: &{Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:21:19.562299  582281 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:21:18.941541  581232 out.go:235]   - Generating certificates and keys ...
	I1205 20:21:18.941649  581232 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:21:18.941750  581232 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:21:19.209460  581232 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:21:19.828183  581232 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:21:20.001872  581232 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:21:20.184883  581232 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:21:20.484359  581232 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:21:20.484615  581232 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-386085] and IPs [192.168.72.144 127.0.0.1 ::1]
	I1205 20:21:20.596214  581232 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:21:20.596437  581232 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-386085] and IPs [192.168.72.144 127.0.0.1 ::1]
	I1205 20:21:20.863231  581232 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:21:21.167752  581232 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:21:21.260475  581232 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:21:21.260585  581232 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:21:21.617603  581232 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:21:21.682487  581232 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:21:21.851457  581232 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:21:22.102212  581232 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:21:22.121499  581232 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:21:22.123402  581232 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:21:22.123474  581232 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:21:22.332337  581232 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:21:19.564472  582281 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:21:19.564696  582281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:21:19.564805  582281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:21:19.582296  582281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I1205 20:21:19.582882  582281 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:21:19.583478  582281 main.go:141] libmachine: Using API Version  1
	I1205 20:21:19.583499  582281 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:21:19.583928  582281 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:21:19.584136  582281 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:21:19.584335  582281 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:21:19.584492  582281 start.go:159] libmachine.API.Create for "no-preload-816185" (driver="kvm2")
	I1205 20:21:19.584520  582281 client.go:168] LocalClient.Create starting
	I1205 20:21:19.584556  582281 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 20:21:19.584604  582281 main.go:141] libmachine: Decoding PEM data...
	I1205 20:21:19.584627  582281 main.go:141] libmachine: Parsing certificate...
	I1205 20:21:19.584703  582281 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 20:21:19.584733  582281 main.go:141] libmachine: Decoding PEM data...
	I1205 20:21:19.584755  582281 main.go:141] libmachine: Parsing certificate...
	I1205 20:21:19.584782  582281 main.go:141] libmachine: Running pre-create checks...
	I1205 20:21:19.584793  582281 main.go:141] libmachine: (no-preload-816185) Calling .PreCreateCheck
	I1205 20:21:19.585171  582281 main.go:141] libmachine: (no-preload-816185) Calling .GetConfigRaw
	I1205 20:21:19.585656  582281 main.go:141] libmachine: Creating machine...
	I1205 20:21:19.585675  582281 main.go:141] libmachine: (no-preload-816185) Calling .Create
	I1205 20:21:19.585861  582281 main.go:141] libmachine: (no-preload-816185) Creating KVM machine...
	I1205 20:21:19.587504  582281 main.go:141] libmachine: (no-preload-816185) DBG | found existing default KVM network
	I1205 20:21:19.589451  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:19.589223  582421 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:aa:7e} reservation:<nil>}
	I1205 20:21:19.590508  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:19.590406  582421 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:77:44} reservation:<nil>}
	I1205 20:21:19.591999  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:19.591899  582421 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030eac0}
	I1205 20:21:19.592050  582281 main.go:141] libmachine: (no-preload-816185) DBG | created network xml: 
	I1205 20:21:19.592078  582281 main.go:141] libmachine: (no-preload-816185) DBG | <network>
	I1205 20:21:19.592091  582281 main.go:141] libmachine: (no-preload-816185) DBG |   <name>mk-no-preload-816185</name>
	I1205 20:21:19.592103  582281 main.go:141] libmachine: (no-preload-816185) DBG |   <dns enable='no'/>
	I1205 20:21:19.592114  582281 main.go:141] libmachine: (no-preload-816185) DBG |   
	I1205 20:21:19.592121  582281 main.go:141] libmachine: (no-preload-816185) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1205 20:21:19.592132  582281 main.go:141] libmachine: (no-preload-816185) DBG |     <dhcp>
	I1205 20:21:19.592144  582281 main.go:141] libmachine: (no-preload-816185) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1205 20:21:19.592169  582281 main.go:141] libmachine: (no-preload-816185) DBG |     </dhcp>
	I1205 20:21:19.592189  582281 main.go:141] libmachine: (no-preload-816185) DBG |   </ip>
	I1205 20:21:19.592198  582281 main.go:141] libmachine: (no-preload-816185) DBG |   
	I1205 20:21:19.592204  582281 main.go:141] libmachine: (no-preload-816185) DBG | </network>
	I1205 20:21:19.592214  582281 main.go:141] libmachine: (no-preload-816185) DBG | 
	I1205 20:21:19.597933  582281 main.go:141] libmachine: (no-preload-816185) DBG | trying to create private KVM network mk-no-preload-816185 192.168.61.0/24...
	I1205 20:21:19.674695  582281 main.go:141] libmachine: (no-preload-816185) DBG | private KVM network mk-no-preload-816185 192.168.61.0/24 created
	I1205 20:21:19.674729  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:19.674648  582421 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:21:19.674742  582281 main.go:141] libmachine: (no-preload-816185) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185 ...
	I1205 20:21:19.674761  582281 main.go:141] libmachine: (no-preload-816185) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:21:19.674777  582281 main.go:141] libmachine: (no-preload-816185) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:21:19.947820  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:19.947620  582421 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa...
	I1205 20:21:20.337293  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:20.337113  582421 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/no-preload-816185.rawdisk...
	I1205 20:21:20.337335  582281 main.go:141] libmachine: (no-preload-816185) DBG | Writing magic tar header
	I1205 20:21:20.337356  582281 main.go:141] libmachine: (no-preload-816185) DBG | Writing SSH key tar header
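
The "Writing magic tar header" / "Writing SSH key tar header" messages above refer to the kvm2 driver seeding the raw disk with a small tar archive that carries the freshly generated SSH key, which the guest image unpacks on first boot. A simplified sketch of building such a disk; the ".ssh/authorized_keys" entry name and the exact header layout are assumptions, only the 20000MB disk size comes from the config in the log.

    package main

    import (
        "archive/tar"
        "os"
    )

    func createRawDiskWithKey(path, pubKeyPath string, sizeBytes int64) error {
        key, err := os.ReadFile(pubKeyPath)
        if err != nil {
            return err
        }
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()

        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(key))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(key); err != nil {
            return err
        }
        if err := tw.Close(); err != nil {
            return err
        }
        // Grow to full size; the unwritten tail stays sparse on disk.
        return f.Truncate(sizeBytes)
    }

    func main() {
        _ = createRawDiskWithKey("no-preload-816185.rawdisk", "id_rsa.pub", 20000*1024*1024)
    }
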
	I1205 20:21:20.337367  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:20.337258  582421 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185 ...
	I1205 20:21:20.337385  582281 main.go:141] libmachine: (no-preload-816185) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185
	I1205 20:21:20.337488  582281 main.go:141] libmachine: (no-preload-816185) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185 (perms=drwx------)
	I1205 20:21:20.337521  582281 main.go:141] libmachine: (no-preload-816185) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 20:21:20.337533  582281 main.go:141] libmachine: (no-preload-816185) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:21:20.337579  582281 main.go:141] libmachine: (no-preload-816185) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:21:20.337597  582281 main.go:141] libmachine: (no-preload-816185) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 20:21:20.337616  582281 main.go:141] libmachine: (no-preload-816185) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 20:21:20.337625  582281 main.go:141] libmachine: (no-preload-816185) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:21:20.337635  582281 main.go:141] libmachine: (no-preload-816185) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 20:21:20.337648  582281 main.go:141] libmachine: (no-preload-816185) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:21:20.337657  582281 main.go:141] libmachine: (no-preload-816185) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:21:20.337667  582281 main.go:141] libmachine: (no-preload-816185) DBG | Checking permissions on dir: /home
	I1205 20:21:20.337674  582281 main.go:141] libmachine: (no-preload-816185) DBG | Skipping /home - not owner
	I1205 20:21:20.337687  582281 main.go:141] libmachine: (no-preload-816185) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:21:20.337702  582281 main.go:141] libmachine: (no-preload-816185) Creating domain...
	I1205 20:21:20.338851  582281 main.go:141] libmachine: (no-preload-816185) define libvirt domain using xml: 
	I1205 20:21:20.338874  582281 main.go:141] libmachine: (no-preload-816185) <domain type='kvm'>
	I1205 20:21:20.338898  582281 main.go:141] libmachine: (no-preload-816185)   <name>no-preload-816185</name>
	I1205 20:21:20.338914  582281 main.go:141] libmachine: (no-preload-816185)   <memory unit='MiB'>2200</memory>
	I1205 20:21:20.338923  582281 main.go:141] libmachine: (no-preload-816185)   <vcpu>2</vcpu>
	I1205 20:21:20.338930  582281 main.go:141] libmachine: (no-preload-816185)   <features>
	I1205 20:21:20.338938  582281 main.go:141] libmachine: (no-preload-816185)     <acpi/>
	I1205 20:21:20.338949  582281 main.go:141] libmachine: (no-preload-816185)     <apic/>
	I1205 20:21:20.338960  582281 main.go:141] libmachine: (no-preload-816185)     <pae/>
	I1205 20:21:20.338966  582281 main.go:141] libmachine: (no-preload-816185)     
	I1205 20:21:20.338977  582281 main.go:141] libmachine: (no-preload-816185)   </features>
	I1205 20:21:20.338993  582281 main.go:141] libmachine: (no-preload-816185)   <cpu mode='host-passthrough'>
	I1205 20:21:20.339002  582281 main.go:141] libmachine: (no-preload-816185)   
	I1205 20:21:20.339013  582281 main.go:141] libmachine: (no-preload-816185)   </cpu>
	I1205 20:21:20.339041  582281 main.go:141] libmachine: (no-preload-816185)   <os>
	I1205 20:21:20.339072  582281 main.go:141] libmachine: (no-preload-816185)     <type>hvm</type>
	I1205 20:21:20.339082  582281 main.go:141] libmachine: (no-preload-816185)     <boot dev='cdrom'/>
	I1205 20:21:20.339092  582281 main.go:141] libmachine: (no-preload-816185)     <boot dev='hd'/>
	I1205 20:21:20.339101  582281 main.go:141] libmachine: (no-preload-816185)     <bootmenu enable='no'/>
	I1205 20:21:20.339111  582281 main.go:141] libmachine: (no-preload-816185)   </os>
	I1205 20:21:20.339120  582281 main.go:141] libmachine: (no-preload-816185)   <devices>
	I1205 20:21:20.339131  582281 main.go:141] libmachine: (no-preload-816185)     <disk type='file' device='cdrom'>
	I1205 20:21:20.339148  582281 main.go:141] libmachine: (no-preload-816185)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/boot2docker.iso'/>
	I1205 20:21:20.339159  582281 main.go:141] libmachine: (no-preload-816185)       <target dev='hdc' bus='scsi'/>
	I1205 20:21:20.339170  582281 main.go:141] libmachine: (no-preload-816185)       <readonly/>
	I1205 20:21:20.339178  582281 main.go:141] libmachine: (no-preload-816185)     </disk>
	I1205 20:21:20.339192  582281 main.go:141] libmachine: (no-preload-816185)     <disk type='file' device='disk'>
	I1205 20:21:20.339213  582281 main.go:141] libmachine: (no-preload-816185)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:21:20.339233  582281 main.go:141] libmachine: (no-preload-816185)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/no-preload-816185.rawdisk'/>
	I1205 20:21:20.339244  582281 main.go:141] libmachine: (no-preload-816185)       <target dev='hda' bus='virtio'/>
	I1205 20:21:20.339253  582281 main.go:141] libmachine: (no-preload-816185)     </disk>
	I1205 20:21:20.339264  582281 main.go:141] libmachine: (no-preload-816185)     <interface type='network'>
	I1205 20:21:20.339275  582281 main.go:141] libmachine: (no-preload-816185)       <source network='mk-no-preload-816185'/>
	I1205 20:21:20.339285  582281 main.go:141] libmachine: (no-preload-816185)       <model type='virtio'/>
	I1205 20:21:20.339293  582281 main.go:141] libmachine: (no-preload-816185)     </interface>
	I1205 20:21:20.339303  582281 main.go:141] libmachine: (no-preload-816185)     <interface type='network'>
	I1205 20:21:20.339323  582281 main.go:141] libmachine: (no-preload-816185)       <source network='default'/>
	I1205 20:21:20.339331  582281 main.go:141] libmachine: (no-preload-816185)       <model type='virtio'/>
	I1205 20:21:20.339339  582281 main.go:141] libmachine: (no-preload-816185)     </interface>
	I1205 20:21:20.339345  582281 main.go:141] libmachine: (no-preload-816185)     <serial type='pty'>
	I1205 20:21:20.339353  582281 main.go:141] libmachine: (no-preload-816185)       <target port='0'/>
	I1205 20:21:20.339359  582281 main.go:141] libmachine: (no-preload-816185)     </serial>
	I1205 20:21:20.339367  582281 main.go:141] libmachine: (no-preload-816185)     <console type='pty'>
	I1205 20:21:20.339374  582281 main.go:141] libmachine: (no-preload-816185)       <target type='serial' port='0'/>
	I1205 20:21:20.339382  582281 main.go:141] libmachine: (no-preload-816185)     </console>
	I1205 20:21:20.339393  582281 main.go:141] libmachine: (no-preload-816185)     <rng model='virtio'>
	I1205 20:21:20.339403  582281 main.go:141] libmachine: (no-preload-816185)       <backend model='random'>/dev/random</backend>
	I1205 20:21:20.339413  582281 main.go:141] libmachine: (no-preload-816185)     </rng>
	I1205 20:21:20.339420  582281 main.go:141] libmachine: (no-preload-816185)     
	I1205 20:21:20.339429  582281 main.go:141] libmachine: (no-preload-816185)     
	I1205 20:21:20.339436  582281 main.go:141] libmachine: (no-preload-816185)   </devices>
	I1205 20:21:20.339446  582281 main.go:141] libmachine: (no-preload-816185) </domain>
	I1205 20:21:20.339457  582281 main.go:141] libmachine: (no-preload-816185) 
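
After printing the domain XML above, the driver hands it to libvirt and boots the machine. A minimal define-then-start sketch, assuming the libvirt.org/go/libvirt bindings and the qemu:///system URI from the cluster config; reading the XML from a file is illustrative only.

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
        if err != nil {
            return err
        }
        defer dom.Free()

        // Boot the VM; the driver then polls DHCP leases for an IP (below).
        return dom.Create()
    }

    func main() {
        xmlBytes, err := os.ReadFile("no-preload-816185.xml") // the XML printed above, saved to a file
        if err != nil {
            log.Fatal(err)
        }
        if err := defineAndStart(string(xmlBytes)); err != nil {
            log.Fatal(err)
        }
    }
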
	I1205 20:21:20.343728  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:cc:5c:fa in network default
	I1205 20:21:20.344415  582281 main.go:141] libmachine: (no-preload-816185) Ensuring networks are active...
	I1205 20:21:20.344448  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:21:20.345102  582281 main.go:141] libmachine: (no-preload-816185) Ensuring network default is active
	I1205 20:21:20.345540  582281 main.go:141] libmachine: (no-preload-816185) Ensuring network mk-no-preload-816185 is active
	I1205 20:21:20.346121  582281 main.go:141] libmachine: (no-preload-816185) Getting domain xml...
	I1205 20:21:20.346886  582281 main.go:141] libmachine: (no-preload-816185) Creating domain...
	I1205 20:21:21.799158  582281 main.go:141] libmachine: (no-preload-816185) Waiting to get IP...
	I1205 20:21:21.800327  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:21:21.800836  582281 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:21:21.800916  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:21.800838  582421 retry.go:31] will retry after 189.62787ms: waiting for machine to come up
	I1205 20:21:21.992592  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:21:21.993314  582281 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:21:21.993338  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:21.993281  582421 retry.go:31] will retry after 266.100978ms: waiting for machine to come up
	I1205 20:21:22.261033  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:21:22.261651  582281 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:21:22.261689  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:22.261571  582421 retry.go:31] will retry after 425.427897ms: waiting for machine to come up
	I1205 20:21:22.688362  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:21:22.688886  582281 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:21:22.688919  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:22.688865  582421 retry.go:31] will retry after 382.972479ms: waiting for machine to come up
	I1205 20:21:23.073613  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:21:23.074121  582281 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:21:23.074157  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:23.074055  582421 retry.go:31] will retry after 576.472708ms: waiting for machine to come up
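
The "will retry after …ms: waiting for machine to come up" lines come from a bounded retry loop that polls for the machine's IP with growing, jittered delays. A generic sketch of that pattern; the backoff schedule and attempt count here are illustrative, not minikube's exact values.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func waitForIP(lookup func() (string, bool), attempts int) (string, error) {
        delay := 200 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            // Jitter keeps parallel machine creations from polling in lockstep.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return "", errors.New("machine never reported an IP")
    }

    func main() {
        tries := 0
        ip, err := waitForIP(func() (string, bool) {
            tries++
            return "192.168.61.2", tries > 3 // pretend DHCP answers on the 4th poll
        }, 10)
        fmt.Println(ip, err)
    }
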
	I1205 20:21:19.315980  581730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:21:19.316005  581730 machine.go:96] duration metric: took 10.004430811s to provisionDockerMachine
	I1205 20:21:19.316028  581730 start.go:293] postStartSetup for "kubernetes-upgrade-886958" (driver="kvm2")
	I1205 20:21:19.316040  581730 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:21:19.316059  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:21:19.316397  581730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:21:19.316437  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:21:19.319437  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:19.319871  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:19.319909  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:19.320043  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:21:19.320245  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:19.320455  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:21:19.320613  581730 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa Username:docker}
	I1205 20:21:19.404385  581730 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:21:19.409363  581730 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:21:19.409408  581730 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:21:19.409528  581730 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:21:19.409673  581730 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:21:19.409842  581730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:21:19.420971  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:21:19.446696  581730 start.go:296] duration metric: took 130.646943ms for postStartSetup
	I1205 20:21:19.446752  581730 fix.go:56] duration metric: took 10.160944731s for fixHost
	I1205 20:21:19.446780  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:21:19.450014  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:19.450447  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:19.450481  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:19.450644  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:21:19.450854  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:19.451022  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:19.451244  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:21:19.451470  581730 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:19.451671  581730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1205 20:21:19.451692  581730 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:21:19.561859  581730 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430079.554148366
	
	I1205 20:21:19.561890  581730 fix.go:216] guest clock: 1733430079.554148366
	I1205 20:21:19.561899  581730 fix.go:229] Guest: 2024-12-05 20:21:19.554148366 +0000 UTC Remote: 2024-12-05 20:21:19.446757435 +0000 UTC m=+56.041917180 (delta=107.390931ms)
	I1205 20:21:19.561928  581730 fix.go:200] guest clock delta is within tolerance: 107.390931ms
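
The fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock captured at the same moment and accept the machine when the delta is within tolerance. A sketch of that comparison using the values from the log; the 2s tolerance here is an assumption for illustration.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(hostNow), nil
    }

    func withinTolerance(delta, tolerance time.Duration) bool {
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        // Values from the log: guest 1733430079.554148366 vs host
        // 2024-12-05 20:21:19.446757435 UTC -> delta ≈ 107ms, well inside 2s.
        host := time.Date(2024, 12, 5, 20, 21, 19, 446757435, time.UTC)
        delta, _ := clockDelta("1733430079.554148366", host)
        fmt.Println(delta, withinTolerance(delta, 2*time.Second))
    }
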
	I1205 20:21:19.561937  581730 start.go:83] releasing machines lock for "kubernetes-upgrade-886958", held for 10.276168302s
	I1205 20:21:19.561965  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:21:19.562289  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetIP
	I1205 20:21:19.565668  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:19.566174  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:19.566204  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:19.566416  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:21:19.567086  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:21:19.567284  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .DriverName
	I1205 20:21:19.567384  581730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:21:19.567430  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:21:19.567511  581730 ssh_runner.go:195] Run: cat /version.json
	I1205 20:21:19.567530  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHHostname
	I1205 20:21:19.570350  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:19.570486  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:19.570727  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:19.570780  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:19.570810  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:19.570823  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:19.571118  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:21:19.571127  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHPort
	I1205 20:21:19.571339  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:19.571339  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHKeyPath
	I1205 20:21:19.571520  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:21:19.571536  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetSSHUsername
	I1205 20:21:19.571661  581730 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa Username:docker}
	I1205 20:21:19.571687  581730 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/kubernetes-upgrade-886958/id_rsa Username:docker}
	I1205 20:21:19.651471  581730 ssh_runner.go:195] Run: systemctl --version
	I1205 20:21:19.677596  581730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:21:19.860849  581730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:21:19.888118  581730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:21:19.888239  581730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:21:19.955955  581730 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 20:21:19.955987  581730 start.go:495] detecting cgroup driver to use...
	I1205 20:21:19.956060  581730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:21:20.025436  581730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:21:20.117313  581730 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:21:20.117389  581730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:21:20.166162  581730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:21:20.310641  581730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:21:20.692103  581730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:21:21.244219  581730 docker.go:233] disabling docker service ...
	I1205 20:21:21.244378  581730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:21:21.394321  581730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:21:21.477079  581730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:21:21.807891  581730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:21:22.055636  581730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:21:22.075978  581730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:21:22.098101  581730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:21:22.098179  581730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:22.114658  581730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:21:22.114749  581730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:22.132174  581730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:22.148990  581730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:22.166052  581730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:21:22.185580  581730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:22.249906  581730 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:22.279812  581730 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
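
The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A sketch of the same idea for the two most visible settings, done with a plain regexp rewrite instead of sed; the path and values are taken from the log, and a real run needs root.

    package main

    import (
        "os"
        "regexp"
    )

    func configureCrio(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        _ = configureCrio("/etc/crio/crio.conf.d/02-crio.conf")
    }
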
	I1205 20:21:22.341601  581730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:21:22.414947  581730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:21:22.446212  581730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:21:22.745638  581730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:21:23.431572  581730 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:21:23.431656  581730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:21:22.334071  581232 out.go:235]   - Booting up control plane ...
	I1205 20:21:22.334225  581232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:21:22.351953  581232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:21:22.353215  581232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:21:22.354180  581232 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:21:22.360481  581232 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
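
The [wait-control-plane] phase blocks until the static pods come up, and the log warns it can take up to 4m0s. A generic sketch of that style of readiness poll against the API server's /healthz; the node endpoint and the 4-minute budget come from the log, while the insecure TLS client is only for brevity (a real check would trust the cluster CA).

    package main

    import (
        "crypto/tls"
        "errors"
        "net/http"
        "time"
    )

    func waitForAPIServer(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // control plane is up
                }
            }
            time.Sleep(time.Second)
        }
        return errors.New("timed out waiting for the API server")
    }

    func main() {
        if err := waitForAPIServer("https://192.168.72.144:8443/healthz", 4*time.Minute); err != nil {
            panic(err)
        }
    }
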
	I1205 20:21:23.458451  581730 start.go:563] Will wait 60s for crictl version
	I1205 20:21:23.458547  581730 ssh_runner.go:195] Run: which crictl
	I1205 20:21:23.475683  581730 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:21:23.646856  581730 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:21:23.646956  581730 ssh_runner.go:195] Run: crio --version
	I1205 20:21:23.854531  581730 ssh_runner.go:195] Run: crio --version
	I1205 20:21:24.074052  581730 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:21:23.651919  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:21:23.652468  582281 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:21:23.652503  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:23.652394  582421 retry.go:31] will retry after 733.97996ms: waiting for machine to come up
	I1205 20:21:24.388516  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:21:24.389131  582281 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:21:24.389166  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:24.389054  582421 retry.go:31] will retry after 976.770792ms: waiting for machine to come up
	I1205 20:21:25.366993  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:21:25.367579  582281 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:21:25.367608  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:25.367525  582421 retry.go:31] will retry after 1.155026145s: waiting for machine to come up
	I1205 20:21:26.524003  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:21:26.524511  582281 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:21:26.524534  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:26.524460  582421 retry.go:31] will retry after 1.436453392s: waiting for machine to come up
	I1205 20:21:27.963082  582281 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:21:27.963748  582281 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:21:27.963776  582281 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:21:27.963671  582421 retry.go:31] will retry after 1.438850624s: waiting for machine to come up
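The repeated "will retry after …: waiting for machine to come up" lines above come from a wait loop that polls libvirt for the VM's DHCP lease until an IP address appears. As a minimal sketch only (the lookup helper and the growth factor below are assumptions for illustration, not minikube's retry implementation), the pattern is a growing, jittered backoff around an IP lookup:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		wait := 500 * time.Millisecond
		for attempt := 1; attempt <= 8; attempt++ {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			// grow the delay and add jitter, mirroring the increasing
			// "will retry after" durations in the log above
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			wait = wait * 3 / 2
		}
		fmt.Println("timed out waiting for machine to come up")
	}
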
	I1205 20:21:24.075757  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) Calling .GetIP
	I1205 20:21:24.080098  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:24.080623  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:f0:89", ip: ""} in network mk-kubernetes-upgrade-886958: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:35 +0000 UTC Type:0 Mac:52:54:00:d3:f0:89 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-886958 Clientid:01:52:54:00:d3:f0:89}
	I1205 20:21:24.080702  581730 main.go:141] libmachine: (kubernetes-upgrade-886958) DBG | domain kubernetes-upgrade-886958 has defined IP address 192.168.39.144 and MAC address 52:54:00:d3:f0:89 in network mk-kubernetes-upgrade-886958
	I1205 20:21:24.080985  581730 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:21:24.108095  581730 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-886958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:kubernetes-upgrade-886958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:21:24.108252  581730 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:21:24.108347  581730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:21:24.224602  581730 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:21:24.224633  581730 crio.go:433] Images already preloaded, skipping extraction
	I1205 20:21:24.224695  581730 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:21:24.289024  581730 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:21:24.289060  581730 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:21:24.289071  581730 kubeadm.go:934] updating node { 192.168.39.144 8443 v1.31.2 crio true true} ...
	I1205 20:21:24.289239  581730 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-886958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-886958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:21:24.289350  581730 ssh_runner.go:195] Run: crio config
	I1205 20:21:24.414522  581730 cni.go:84] Creating CNI manager for ""
	I1205 20:21:24.414551  581730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:21:24.414565  581730 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:21:24.414599  581730 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-886958 NodeName:kubernetes-upgrade-886958 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:21:24.414782  581730 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-886958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.144"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:21:24.414868  581730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:21:24.433567  581730 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:21:24.433672  581730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:21:24.444633  581730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1205 20:21:24.499613  581730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:21:24.520774  581730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1205 20:21:24.547859  581730 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I1205 20:21:24.555335  581730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:21:24.733137  581730 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:21:24.748146  581730 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958 for IP: 192.168.39.144
	I1205 20:21:24.748182  581730 certs.go:194] generating shared ca certs ...
	I1205 20:21:24.748207  581730 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:24.748460  581730 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:21:24.748527  581730 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:21:24.748542  581730 certs.go:256] generating profile certs ...
	I1205 20:21:24.748650  581730 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/client.key
	I1205 20:21:24.748713  581730 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.key.0467c358
	I1205 20:21:24.748759  581730 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/proxy-client.key
	I1205 20:21:24.748906  581730 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:21:24.748941  581730 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:21:24.748951  581730 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:21:24.748974  581730 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:21:24.749004  581730 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:21:24.749043  581730 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:21:24.749104  581730 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:21:24.749765  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:21:24.779528  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:21:24.807189  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:21:24.835737  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:21:24.868540  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 20:21:24.899640  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:21:24.929024  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:21:24.959135  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/kubernetes-upgrade-886958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:21:24.985597  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:21:25.012245  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:21:25.044336  581730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:21:25.076917  581730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:21:25.097096  581730 ssh_runner.go:195] Run: openssl version
	I1205 20:21:25.103624  581730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:21:25.115490  581730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:21:25.122090  581730 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:21:25.122155  581730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:21:25.128772  581730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:21:25.142164  581730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:21:25.157407  581730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:21:25.162588  581730 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:21:25.162666  581730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:21:25.168978  581730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:21:25.180132  581730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:21:25.195880  581730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:21:25.200983  581730 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:21:25.201066  581730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:21:25.207312  581730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:21:25.217282  581730 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:21:25.223250  581730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:21:25.229879  581730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:21:25.236425  581730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:21:25.243003  581730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:21:25.250453  581730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:21:25.257045  581730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
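Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks one question: will this certificate still be valid 86400 seconds (24 hours) from now? A minimal Go equivalent of that check, shown purely as an illustration (the certificate path is taken from the log; this is not minikube's code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// -checkend 86400: fail if the cert expires within the next 24 hours
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h; it needs to be regenerated")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}
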
	I1205 20:21:25.263306  581730 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-886958 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-886958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:21:25.263442  581730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:21:25.263504  581730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:21:25.302740  581730 cri.go:89] found id: "180b43b72d0704b4982dab57907cb180321161b22f7e77b7e7840df1ffc7b13d"
	I1205 20:21:25.302768  581730 cri.go:89] found id: "0aa84a4afc1194659d4f57050dc79b16a732231eb09838060e0f3f5f220826c2"
	I1205 20:21:25.302775  581730 cri.go:89] found id: "98f7ccc8f9c9535bbe71bf62d3c1f7de78e276297a7e0df197307316a10f9349"
	I1205 20:21:25.302779  581730 cri.go:89] found id: "e9ceac1ea3de43c8108d3d1e9f20419b2fc41f2c281a9f6a8f518cffdf2a9ec2"
	I1205 20:21:25.302784  581730 cri.go:89] found id: "f8d9ca04f9c8d9090ebb18b6979ea449d11e1700da8b16001f389202828d9fe9"
	I1205 20:21:25.302788  581730 cri.go:89] found id: "82d49b113410b818e7be867e3d2bdc3f330c84981f9ddff7537a98a9ab2e2069"
	I1205 20:21:25.302793  581730 cri.go:89] found id: "31bb1d675e733bb4beaa63fb8185c4e1189f8156e7fc05f7baa2a9e1b14d54bc"
	I1205 20:21:25.302797  581730 cri.go:89] found id: "45ceb5d18c1b5148eaf6d702046da342581073d2372057dd05ed886c4130c5b8"
	I1205 20:21:25.302801  581730 cri.go:89] found id: ""
	I1205 20:21:25.302891  581730 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
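For context on the truncated dump above: the container IDs in the "found id:" lines are collected by the `sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"` call shown a few lines earlier. A rough, self-contained sketch of that step (illustrative only, not minikube's cri package):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// list every kube-system container (running or not) and print only the IDs
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if id != "" {
				fmt.Println("found id:", id)
			}
		}
	}
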
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-886958 -n kubernetes-upgrade-886958
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-886958 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-886958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-886958
--- FAIL: TestKubernetesUpgrade (443.92s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (48.84s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-594992 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-594992 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.429021695s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-594992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-594992" primary control-plane node in "pause-594992" cluster
	* Updating the running kvm2 "pause-594992" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-594992" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:17:19.949305  576117 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:17:19.949457  576117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:17:19.949470  576117 out.go:358] Setting ErrFile to fd 2...
	I1205 20:17:19.949478  576117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:17:19.949710  576117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:17:19.950373  576117 out.go:352] Setting JSON to false
	I1205 20:17:19.951448  576117 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10786,"bootTime":1733419054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:17:19.951571  576117 start.go:139] virtualization: kvm guest
	I1205 20:17:19.953742  576117 out.go:177] * [pause-594992] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:17:19.955246  576117 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:17:19.955246  576117 notify.go:220] Checking for updates...
	I1205 20:17:19.957632  576117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:17:19.958946  576117 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:17:19.960187  576117 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:17:19.961453  576117 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:17:19.962795  576117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:17:19.964530  576117 config.go:182] Loaded profile config "pause-594992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:17:19.965107  576117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:17:19.965205  576117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:17:19.986009  576117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I1205 20:17:19.986752  576117 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:17:19.987468  576117 main.go:141] libmachine: Using API Version  1
	I1205 20:17:19.987504  576117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:17:19.988378  576117 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:17:19.988563  576117 main.go:141] libmachine: (pause-594992) Calling .DriverName
	I1205 20:17:19.988830  576117 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:17:19.989283  576117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:17:19.989352  576117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:17:20.012074  576117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I1205 20:17:20.012519  576117 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:17:20.013113  576117 main.go:141] libmachine: Using API Version  1
	I1205 20:17:20.013139  576117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:17:20.013495  576117 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:17:20.013742  576117 main.go:141] libmachine: (pause-594992) Calling .DriverName
	I1205 20:17:20.057646  576117 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:17:20.060587  576117 start.go:297] selected driver: kvm2
	I1205 20:17:20.060617  576117 start.go:901] validating driver "kvm2" against &{Name:pause-594992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.2 ClusterName:pause-594992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:17:20.060834  576117 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:17:20.061345  576117 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:17:20.061463  576117 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:17:20.085348  576117 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:17:20.086174  576117 cni.go:84] Creating CNI manager for ""
	I1205 20:17:20.086233  576117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:17:20.086305  576117 start.go:340] cluster config:
	{Name:pause-594992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-594992 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false
registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:17:20.086472  576117 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:17:20.088358  576117 out.go:177] * Starting "pause-594992" primary control-plane node in "pause-594992" cluster
	I1205 20:17:20.089825  576117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:17:20.089883  576117 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:17:20.089898  576117 cache.go:56] Caching tarball of preloaded images
	I1205 20:17:20.090031  576117 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:17:20.090048  576117 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:17:20.090233  576117 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/pause-594992/config.json ...
	I1205 20:17:20.090649  576117 start.go:360] acquireMachinesLock for pause-594992: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:17:20.090744  576117 start.go:364] duration metric: took 58.19µs to acquireMachinesLock for "pause-594992"
	I1205 20:17:20.090768  576117 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:17:20.090776  576117 fix.go:54] fixHost starting: 
	I1205 20:17:20.091155  576117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:17:20.091211  576117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:17:20.111530  576117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I1205 20:17:20.112198  576117 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:17:20.112879  576117 main.go:141] libmachine: Using API Version  1
	I1205 20:17:20.112907  576117 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:17:20.113434  576117 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:17:20.113639  576117 main.go:141] libmachine: (pause-594992) Calling .DriverName
	I1205 20:17:20.113806  576117 main.go:141] libmachine: (pause-594992) Calling .GetState
	I1205 20:17:20.116110  576117 fix.go:112] recreateIfNeeded on pause-594992: state=Running err=<nil>
	W1205 20:17:20.116153  576117 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:17:20.119078  576117 out.go:177] * Updating the running kvm2 "pause-594992" VM ...
	I1205 20:17:20.120372  576117 machine.go:93] provisionDockerMachine start ...
	I1205 20:17:20.120398  576117 main.go:141] libmachine: (pause-594992) Calling .DriverName
	I1205 20:17:20.120652  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHHostname
	I1205 20:17:20.123888  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.124409  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:20.124453  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.124730  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHPort
	I1205 20:17:20.124924  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:20.125068  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:20.125239  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHUsername
	I1205 20:17:20.125397  576117 main.go:141] libmachine: Using SSH client type: native
	I1205 20:17:20.125600  576117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I1205 20:17:20.125607  576117 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:17:20.254412  576117 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-594992
	
	I1205 20:17:20.254445  576117 main.go:141] libmachine: (pause-594992) Calling .GetMachineName
	I1205 20:17:20.254732  576117 buildroot.go:166] provisioning hostname "pause-594992"
	I1205 20:17:20.254819  576117 main.go:141] libmachine: (pause-594992) Calling .GetMachineName
	I1205 20:17:20.255028  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHHostname
	I1205 20:17:20.258790  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.259262  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:20.259326  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.259546  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHPort
	I1205 20:17:20.259751  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:20.259899  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:20.260101  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHUsername
	I1205 20:17:20.260348  576117 main.go:141] libmachine: Using SSH client type: native
	I1205 20:17:20.260555  576117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I1205 20:17:20.260572  576117 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-594992 && echo "pause-594992" | sudo tee /etc/hostname
	I1205 20:17:20.420898  576117 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-594992
	
	I1205 20:17:20.420928  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHHostname
	I1205 20:17:20.424099  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.424489  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:20.424528  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.424834  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHPort
	I1205 20:17:20.425029  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:20.425185  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:20.425318  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHUsername
	I1205 20:17:20.425459  576117 main.go:141] libmachine: Using SSH client type: native
	I1205 20:17:20.425658  576117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I1205 20:17:20.425670  576117 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-594992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-594992/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-594992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:17:20.550108  576117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:17:20.550152  576117 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:17:20.550219  576117 buildroot.go:174] setting up certificates
	I1205 20:17:20.550236  576117 provision.go:84] configureAuth start
	I1205 20:17:20.550256  576117 main.go:141] libmachine: (pause-594992) Calling .GetMachineName
	I1205 20:17:20.550555  576117 main.go:141] libmachine: (pause-594992) Calling .GetIP
	I1205 20:17:20.553893  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.554408  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:20.554444  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.554707  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHHostname
	I1205 20:17:20.557642  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.558097  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:20.558125  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.558322  576117 provision.go:143] copyHostCerts
	I1205 20:17:20.558400  576117 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:17:20.558421  576117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:17:20.558476  576117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:17:20.558581  576117 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:17:20.558590  576117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:17:20.558609  576117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:17:20.558726  576117 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:17:20.558737  576117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:17:20.558756  576117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:17:20.558804  576117 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.pause-594992 san=[127.0.0.1 192.168.50.246 localhost minikube pause-594992]
	I1205 20:17:20.783581  576117 provision.go:177] copyRemoteCerts
	I1205 20:17:20.783672  576117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:17:20.783706  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHHostname
	I1205 20:17:20.787273  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.787756  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:20.787791  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:20.788010  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHPort
	I1205 20:17:20.788345  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:20.788575  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHUsername
	I1205 20:17:20.788769  576117 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/pause-594992/id_rsa Username:docker}
	I1205 20:17:20.888050  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:17:20.928023  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:17:20.970053  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:17:21.010578  576117 provision.go:87] duration metric: took 460.321131ms to configureAuth
	I1205 20:17:21.010617  576117 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:17:21.010860  576117 config.go:182] Loaded profile config "pause-594992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:17:21.010944  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHHostname
	I1205 20:17:21.014231  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:21.014762  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:21.014800  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:21.015148  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHPort
	I1205 20:17:21.015433  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:21.015615  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:21.015752  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHUsername
	I1205 20:17:21.016008  576117 main.go:141] libmachine: Using SSH client type: native
	I1205 20:17:21.016255  576117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I1205 20:17:21.016305  576117 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:17:26.584150  576117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:17:26.584191  576117 machine.go:96] duration metric: took 6.463804441s to provisionDockerMachine
	I1205 20:17:26.584220  576117 start.go:293] postStartSetup for "pause-594992" (driver="kvm2")
	I1205 20:17:26.584261  576117 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:17:26.584316  576117 main.go:141] libmachine: (pause-594992) Calling .DriverName
	I1205 20:17:26.584769  576117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:17:26.584801  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHHostname
	I1205 20:17:26.587756  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:26.588154  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:26.588197  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:26.588376  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHPort
	I1205 20:17:26.588600  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:26.588801  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHUsername
	I1205 20:17:26.588920  576117 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/pause-594992/id_rsa Username:docker}
	I1205 20:17:26.675941  576117 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:17:26.681468  576117 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:17:26.681494  576117 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:17:26.681561  576117 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:17:26.681634  576117 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:17:26.681721  576117 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:17:26.694432  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:17:26.723581  576117 start.go:296] duration metric: took 139.34215ms for postStartSetup
	I1205 20:17:26.723630  576117 fix.go:56] duration metric: took 6.632853777s for fixHost
	I1205 20:17:26.723658  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHHostname
	I1205 20:17:26.726899  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:26.727283  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:26.727320  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:26.727593  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHPort
	I1205 20:17:26.727946  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:26.728126  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:26.728364  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHUsername
	I1205 20:17:26.728557  576117 main.go:141] libmachine: Using SSH client type: native
	I1205 20:17:26.728757  576117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.246 22 <nil> <nil>}
	I1205 20:17:26.728770  576117 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:17:26.846198  576117 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733429846.828538102
	
	I1205 20:17:26.846238  576117 fix.go:216] guest clock: 1733429846.828538102
	I1205 20:17:26.846249  576117 fix.go:229] Guest: 2024-12-05 20:17:26.828538102 +0000 UTC Remote: 2024-12-05 20:17:26.723635569 +0000 UTC m=+6.830969677 (delta=104.902533ms)
	I1205 20:17:26.846307  576117 fix.go:200] guest clock delta is within tolerance: 104.902533ms
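
The fix.go lines above read the guest VM's clock over SSH with `date +%s.%N`, compare it against the host-side timestamp, and accept the ~105 ms difference as within tolerance. The Go sketch below reproduces only that comparison; `parseGuestClock` is a hypothetical helper (not minikube code) and the two-second threshold is an assumption for illustration, since the tolerance actually applied is not visible in this log.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds,
    // %N is always nine digits) into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1733429846.828538102") // value taken from the log above
        if err != nil {
            panic(err)
        }
        delta := guest.Sub(time.Now())
        const tolerance = 2 * time.Second // assumed threshold, for illustration only
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
        }
    }
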
	I1205 20:17:26.846330  576117 start.go:83] releasing machines lock for "pause-594992", held for 6.755570389s
	I1205 20:17:26.846359  576117 main.go:141] libmachine: (pause-594992) Calling .DriverName
	I1205 20:17:26.846660  576117 main.go:141] libmachine: (pause-594992) Calling .GetIP
	I1205 20:17:26.850155  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:26.850666  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:26.850696  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:26.850870  576117 main.go:141] libmachine: (pause-594992) Calling .DriverName
	I1205 20:17:26.851490  576117 main.go:141] libmachine: (pause-594992) Calling .DriverName
	I1205 20:17:26.851696  576117 main.go:141] libmachine: (pause-594992) Calling .DriverName
	I1205 20:17:26.851823  576117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:17:26.851882  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHHostname
	I1205 20:17:26.851932  576117 ssh_runner.go:195] Run: cat /version.json
	I1205 20:17:26.851959  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHHostname
	I1205 20:17:26.854885  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:26.855300  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:26.855341  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:26.855360  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:26.855603  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHPort
	I1205 20:17:26.855780  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:26.855891  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:26.855927  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:26.855932  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHUsername
	I1205 20:17:26.856085  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHPort
	I1205 20:17:26.856105  576117 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/pause-594992/id_rsa Username:docker}
	I1205 20:17:26.856207  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHKeyPath
	I1205 20:17:26.856395  576117 main.go:141] libmachine: (pause-594992) Calling .GetSSHUsername
	I1205 20:17:26.856563  576117 sshutil.go:53] new ssh client: &{IP:192.168.50.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/pause-594992/id_rsa Username:docker}
	I1205 20:17:26.963101  576117 ssh_runner.go:195] Run: systemctl --version
	I1205 20:17:26.969832  576117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:17:27.124441  576117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:17:27.132137  576117 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:17:27.132239  576117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:17:27.142406  576117 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 20:17:27.142437  576117 start.go:495] detecting cgroup driver to use...
	I1205 20:17:27.142520  576117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:17:27.160093  576117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:17:27.176395  576117 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:17:27.176478  576117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:17:27.191572  576117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:17:27.208129  576117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:17:27.339344  576117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:17:27.472598  576117 docker.go:233] disabling docker service ...
	I1205 20:17:27.472679  576117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:17:27.493023  576117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:17:27.508686  576117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:17:27.643037  576117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:17:27.808200  576117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:17:27.823606  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:17:27.845210  576117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:17:27.845288  576117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:17:27.856759  576117 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:17:27.856841  576117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:17:27.868064  576117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:17:27.881156  576117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:17:27.896404  576117 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:17:27.915099  576117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:17:27.930687  576117 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:17:27.944837  576117 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:17:27.957846  576117 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:17:27.969984  576117 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:17:27.983004  576117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:17:28.118611  576117 ssh_runner.go:195] Run: sudo systemctl restart crio
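
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted: they pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, and move conmon into the pod cgroup. The Go sketch below applies the same three substitutions to an in-memory copy of the drop-in; the starting contents of `conf` are invented for the example and need not match what is actually on the VM.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Hypothetical starting contents of 02-crio.conf, for illustration only.
        conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        // Pin the pause image, mirroring the first sed above.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        // Switch the cgroup manager to cgroupfs.
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // Drop any existing conmon_cgroup line, then re-add it as "pod"
        // right after cgroup_manager, as the last two sed calls do.
        conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(\s*cgroup_manager = .*)$`).
            ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
        fmt.Print(conf)
    }
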
	I1205 20:17:28.771990  576117 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:17:28.772065  576117 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:17:28.778045  576117 start.go:563] Will wait 60s for crictl version
	I1205 20:17:28.778104  576117 ssh_runner.go:195] Run: which crictl
	I1205 20:17:28.782542  576117 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:17:28.817635  576117 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:17:28.817719  576117 ssh_runner.go:195] Run: crio --version
	I1205 20:17:28.848688  576117 ssh_runner.go:195] Run: crio --version
	I1205 20:17:28.880338  576117 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:17:28.881996  576117 main.go:141] libmachine: (pause-594992) Calling .GetIP
	I1205 20:17:28.884918  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:28.885438  576117 main.go:141] libmachine: (pause-594992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:de:17", ip: ""} in network mk-pause-594992: {Iface:virbr2 ExpiryTime:2024-12-05 21:16:36 +0000 UTC Type:0 Mac:52:54:00:a9:de:17 Iaid: IPaddr:192.168.50.246 Prefix:24 Hostname:pause-594992 Clientid:01:52:54:00:a9:de:17}
	I1205 20:17:28.885472  576117 main.go:141] libmachine: (pause-594992) DBG | domain pause-594992 has defined IP address 192.168.50.246 and MAC address 52:54:00:a9:de:17 in network mk-pause-594992
	I1205 20:17:28.885761  576117 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:17:28.890577  576117 kubeadm.go:883] updating cluster {Name:pause-594992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:pause-594992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-p
lugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:17:28.890709  576117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:17:28.890752  576117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:17:28.939276  576117 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:17:28.939302  576117 crio.go:433] Images already preloaded, skipping extraction
	I1205 20:17:28.939357  576117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:17:28.975320  576117 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:17:28.975346  576117 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:17:28.975354  576117 kubeadm.go:934] updating node { 192.168.50.246 8443 v1.31.2 crio true true} ...
	I1205 20:17:28.975465  576117 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-594992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-594992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:17:28.975530  576117 ssh_runner.go:195] Run: crio config
	I1205 20:17:29.026768  576117 cni.go:84] Creating CNI manager for ""
	I1205 20:17:29.026795  576117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:17:29.026808  576117 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:17:29.026841  576117 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.246 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-594992 NodeName:pause-594992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:17:29.027021  576117 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-594992"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
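
The kubeadm.yaml generated above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by `---`. A minimal sketch of reading it back and pulling the kubelet's cgroup driver and CRI endpoint out of the KubeletConfiguration document; gopkg.in/yaml.v3 is an assumed dependency and the struct declares only the fields this check needs.

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3" // assumed YAML decoder; any multi-document decoder works
    )

    // kubeletConfig declares only the KubeletConfiguration fields inspected here.
    type kubeletConfig struct {
        Kind                     string `yaml:"kind"`
        CgroupDriver             string `yaml:"cgroupDriver"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    }

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from the log above
        if err != nil {
            panic(err)
        }
        dec := yaml.NewDecoder(bytes.NewReader(raw))
        for {
            var kc kubeletConfig
            if err := dec.Decode(&kc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Documents of other kinds simply leave the fields empty.
            if kc.Kind == "KubeletConfiguration" {
                fmt.Printf("cgroupDriver=%q runtimeEndpoint=%q\n", kc.CgroupDriver, kc.ContainerRuntimeEndpoint)
            }
        }
    }
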
	
	I1205 20:17:29.027113  576117 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:17:29.038554  576117 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:17:29.038676  576117 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:17:29.049875  576117 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1205 20:17:29.068070  576117 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:17:29.086506  576117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1205 20:17:29.104694  576117 ssh_runner.go:195] Run: grep 192.168.50.246	control-plane.minikube.internal$ /etc/hosts
	I1205 20:17:29.109030  576117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:17:29.241042  576117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:17:29.257198  576117 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/pause-594992 for IP: 192.168.50.246
	I1205 20:17:29.257224  576117 certs.go:194] generating shared ca certs ...
	I1205 20:17:29.257240  576117 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:17:29.257444  576117 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:17:29.257498  576117 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:17:29.257513  576117 certs.go:256] generating profile certs ...
	I1205 20:17:29.257615  576117 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/pause-594992/client.key
	I1205 20:17:29.257691  576117 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/pause-594992/apiserver.key.6e8564ca
	I1205 20:17:29.257749  576117 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/pause-594992/proxy-client.key
	I1205 20:17:29.257910  576117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:17:29.257952  576117 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:17:29.257968  576117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:17:29.258005  576117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:17:29.258037  576117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:17:29.258078  576117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:17:29.258133  576117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:17:29.258790  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:17:29.291927  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:17:29.352524  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:17:29.437699  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:17:29.639262  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/pause-594992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 20:17:29.834781  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/pause-594992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:17:30.088302  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/pause-594992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:17:30.211061  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/pause-594992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:17:30.288215  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:17:30.369808  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:17:30.475796  576117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:17:30.521524  576117 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:17:30.545789  576117 ssh_runner.go:195] Run: openssl version
	I1205 20:17:30.553752  576117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:17:30.567846  576117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:17:30.575294  576117 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:17:30.575379  576117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:17:30.586273  576117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:17:30.601906  576117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:17:30.621467  576117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:17:30.639819  576117 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:17:30.639900  576117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:17:30.655710  576117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:17:30.688730  576117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:17:30.714883  576117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:17:30.720754  576117 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:17:30.720841  576117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:17:30.730056  576117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:17:30.743799  576117 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:17:30.752414  576117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:17:30.760357  576117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:17:30.768012  576117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:17:30.774701  576117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:17:30.781340  576117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:17:30.788127  576117 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
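
Each `openssl x509 ... -checkend 86400` call above asks whether a certificate expires within the next 24 hours; exit status 0 means it is still valid for at least that long. A standard-library Go equivalent, sketched for one of the certificates checked above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // before now+window, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(window)), nil
    }

    func main() {
        // Path taken from the log above; 86400 seconds == 24 hours.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
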
	I1205 20:17:30.797716  576117 kubeadm.go:392] StartCluster: {Name:pause-594992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:pause-594992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plug
in:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:17:30.797875  576117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:17:30.797967  576117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:17:30.882705  576117 cri.go:89] found id: "495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163"
	I1205 20:17:30.882745  576117 cri.go:89] found id: "5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e"
	I1205 20:17:30.882751  576117 cri.go:89] found id: "03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b"
	I1205 20:17:30.882757  576117 cri.go:89] found id: "520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f"
	I1205 20:17:30.882761  576117 cri.go:89] found id: "b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb"
	I1205 20:17:30.882766  576117 cri.go:89] found id: "8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916"
	I1205 20:17:30.882771  576117 cri.go:89] found id: "1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3"
	I1205 20:17:30.882775  576117 cri.go:89] found id: "a0585ef4ee5ad21ecbfa844d67bbca5d1fecf69dad43cfa7ac6126bdf42997a0"
	I1205 20:17:30.882780  576117 cri.go:89] found id: "2886efe6ebde53691a3e99cfe076bbafeb217dc2edeaa371f7099189d74a5fa6"
	I1205 20:17:30.882793  576117 cri.go:89] found id: "5fc0b5765d3e201741369457b198bb9ec5a61a5675008e978e435957501f01f8"
	I1205 20:17:30.882797  576117 cri.go:89] found id: "05b4a3bd5214c727f059ec8c2342426f28fd49d9a43bdb17d7fdaa7477b4a723"
	I1205 20:17:30.882801  576117 cri.go:89] found id: "1e5850bc705289c2026062e6bb62731933aade93243ad68fa62e12b574758614"
	I1205 20:17:30.882805  576117 cri.go:89] found id: ""
	I1205 20:17:30.882891  576117 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-594992 -n pause-594992
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-594992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-594992 logs -n 25: (1.503908015s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC | 05 Dec 24 20:13 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC | 05 Dec 24 20:13 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:14 UTC | 05 Dec 24 20:14 UTC |
	| start   | -p kubernetes-upgrade-886958   | kubernetes-upgrade-886958 | jenkins | v1.34.0 | 05 Dec 24 20:14 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-974924         | offline-crio-974924       | jenkins | v1.34.0 | 05 Dec 24 20:14 UTC | 05 Dec 24 20:16 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-617890      | minikube                  | jenkins | v1.26.0 | 05 Dec 24 20:14 UTC | 05 Dec 24 20:16 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-899594      | minikube                  | jenkins | v1.26.0 | 05 Dec 24 20:14 UTC | 05 Dec 24 20:15 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-899594 stop    | minikube                  | jenkins | v1.26.0 | 05 Dec 24 20:15 UTC | 05 Dec 24 20:16 UTC |
	| start   | -p stopped-upgrade-899594      | stopped-upgrade-899594    | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC | 05 Dec 24 20:16 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-974924         | offline-crio-974924       | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC | 05 Dec 24 20:16 UTC |
	| start   | -p pause-594992 --memory=2048  | pause-594992              | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC | 05 Dec 24 20:17 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-617890      | running-upgrade-617890    | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-899594      | stopped-upgrade-899594    | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC | 05 Dec 24 20:16 UTC |
	| start   | -p NoKubernetes-739327         | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-739327         | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC | 05 Dec 24 20:17 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-594992                | pause-594992              | jenkins | v1.34.0 | 05 Dec 24 20:17 UTC | 05 Dec 24 20:18 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-739327         | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:17 UTC | 05 Dec 24 20:17 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-739327         | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:17 UTC | 05 Dec 24 20:17 UTC |
	| start   | -p NoKubernetes-739327         | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:17 UTC |                     |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:17:43
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:17:43.266397  576500 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:17:43.266503  576500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:17:43.266507  576500 out.go:358] Setting ErrFile to fd 2...
	I1205 20:17:43.266510  576500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:17:43.266706  576500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:17:43.267275  576500 out.go:352] Setting JSON to false
	I1205 20:17:43.268382  576500 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10809,"bootTime":1733419054,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
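
The hostinfo line above is a JSON blob describing the Jenkins host (hostname, uptime, kernel, virtualization). A small sketch that decodes a few of those fields; the struct is deliberately trimmed and the literal is copied from the line above.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // hostInfo mirrors a handful of the fields in the hostinfo JSON logged above.
    type hostInfo struct {
        Hostname             string `json:"hostname"`
        Uptime               uint64 `json:"uptime"`
        KernelVersion        string `json:"kernelVersion"`
        VirtualizationSystem string `json:"virtualizationSystem"`
        VirtualizationRole   string `json:"virtualizationRole"`
    }

    func main() {
        raw := `{"hostname":"ubuntu-20-agent","uptime":10809,"bootTime":1733419054,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}`
        var hi hostInfo
        if err := json.Unmarshal([]byte(raw), &hi); err != nil {
            panic(err)
        }
        fmt.Printf("%s: up %ds, kernel %s, virtualization %s/%s\n",
            hi.Hostname, hi.Uptime, hi.KernelVersion, hi.VirtualizationSystem, hi.VirtualizationRole)
    }
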
	I1205 20:17:43.268477  576500 start.go:139] virtualization: kvm guest
	I1205 20:17:43.270981  576500 out.go:177] * [NoKubernetes-739327] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:17:43.272498  576500 notify.go:220] Checking for updates...
	I1205 20:17:43.272502  576500 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:17:43.274104  576500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:17:43.275581  576500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:17:43.277074  576500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:17:43.278674  576500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:17:43.280138  576500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:17:43.282205  576500 config.go:182] Loaded profile config "kubernetes-upgrade-886958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:17:43.282408  576500 config.go:182] Loaded profile config "pause-594992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:17:43.282527  576500 config.go:182] Loaded profile config "running-upgrade-617890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1205 20:17:43.282555  576500 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 20:17:43.282682  576500 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:17:43.320943  576500 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:17:43.322392  576500 start.go:297] selected driver: kvm2
	I1205 20:17:43.322403  576500 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:17:43.322420  576500 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:17:43.322818  576500 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 20:17:43.322922  576500 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:17:43.323017  576500 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:17:43.339246  576500 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:17:43.339289  576500 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:17:43.339809  576500 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1205 20:17:43.339960  576500 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 20:17:43.339982  576500 cni.go:84] Creating CNI manager for ""
	I1205 20:17:43.340027  576500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:17:43.340031  576500 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:17:43.340042  576500 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 20:17:43.340114  576500 start.go:340] cluster config:
	{Name:NoKubernetes-739327 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-739327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:17:43.340256  576500 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:17:43.342091  576500 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-739327
	I1205 20:17:43.343298  576500 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1205 20:17:43.453219  576500 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
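
The 404 above is expected: the log sets the Kubernetes version to v0.0.0 when the No Kubernetes flag is used, and no preload tarball is published for that version/runtime pair, so the preload is skipped. A sketch of the same existence probe with net/http; the URL is the one from the log line.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // URL taken from the preload warning above; a non-200 status simply means
        // no preloaded image tarball exists for this Kubernetes version and runtime.
        url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4"
        resp, err := http.Head(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Printf("preload exists: %v (status %d)\n", resp.StatusCode == http.StatusOK, resp.StatusCode)
    }
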
	I1205 20:17:43.453421  576500 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/NoKubernetes-739327/config.json ...
	I1205 20:17:43.453454  576500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/NoKubernetes-739327/config.json: {Name:mk3972a45e368dbc345926c535f87626ea849c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:17:43.453597  576500 start.go:360] acquireMachinesLock for NoKubernetes-739327: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:17:43.453626  576500 start.go:364] duration metric: took 19.985µs to acquireMachinesLock for "NoKubernetes-739327"
	I1205 20:17:43.453636  576500 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-739327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-739327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:17:43.453694  576500 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:17:43.920387  576117 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163 5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e 03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b 520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb 8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916 1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3 a0585ef4ee5ad21ecbfa844d67bbca5d1fecf69dad43cfa7ac6126bdf42997a0 2886efe6ebde53691a3e99cfe076bbafeb217dc2edeaa371f7099189d74a5fa6 5fc0b5765d3e201741369457b198bb9ec5a61a5675008e978e435957501f01f8 05b4a3bd5214c727f059ec8c2342426f28fd49d9a43bdb17d7fdaa7477b4a723 1e5850bc705289c2026062e6bb62731933aade93243ad68fa62e12b574758614: (12.843041525s)
	W1205 20:17:43.920501  576117 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163 5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e 03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b 520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb 8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916 1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3 a0585ef4ee5ad21ecbfa844d67bbca5d1fecf69dad43cfa7ac6126bdf42997a0 2886efe6ebde53691a3e99cfe076bbafeb217dc2edeaa371f7099189d74a5fa6 5fc0b5765d3e201741369457b198bb9ec5a61a5675008e978e435957501f01f8 05b4a3bd5214c727f059ec8c2342426f28fd49d9a43bdb17d7fdaa7477b4a723 1e5850bc705289c2026062e6bb62731933aade93243ad68fa62e12b574758614: Process exited with status 1
	stdout:
	495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163
	5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e
	03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b
	520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f
	b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb
	8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916
	
	stderr:
	E1205 20:17:43.901177    2814 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3\": container with ID starting with 1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3 not found: ID does not exist" containerID="1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3"
	time="2024-12-05T20:17:43Z" level=fatal msg="stopping the container \"1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3\": rpc error: code = NotFound desc = could not find container \"1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3\": container with ID starting with 1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3 not found: ID does not exist"
	I1205 20:17:43.920579  576117 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:17:43.968858  576117 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:17:43.980303  576117 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Dec  5 20:16 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Dec  5 20:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Dec  5 20:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Dec  5 20:16 /etc/kubernetes/scheduler.conf
	
	I1205 20:17:43.980380  576117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:17:43.990046  576117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:17:44.000032  576117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:17:44.012740  576117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:17:44.012802  576117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:17:44.026885  576117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:17:44.037280  576117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:17:44.037353  576117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
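
The grep-then-rm sequence above decides, file by file, whether an existing kubeconfig still references the expected control-plane endpoint and deletes the stale ones before they are regenerated. A rough local equivalent, assuming direct file access rather than minikube's ssh_runner; the function name is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any of the given kubeconfig files that do
// not reference the expected control-plane endpoint, mirroring the
// grep-then-rm sequence in the log.
func pruneStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale config %s\n", p)
			_ = os.Remove(p)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443",
		[]string{"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf"})
}
```
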
	I1205 20:17:44.047380  576117 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:17:44.058495  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:44.115209  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:43.456024  576500 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I1205 20:17:43.456261  576500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:17:43.456334  576500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:17:43.472647  576500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I1205 20:17:43.473207  576500 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:17:43.473885  576500 main.go:141] libmachine: Using API Version  1
	I1205 20:17:43.473908  576500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:17:43.474359  576500 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:17:43.474687  576500 main.go:141] libmachine: (NoKubernetes-739327) Calling .GetMachineName
	I1205 20:17:43.474882  576500 main.go:141] libmachine: (NoKubernetes-739327) Calling .DriverName
	I1205 20:17:43.475125  576500 start.go:159] libmachine.API.Create for "NoKubernetes-739327" (driver="kvm2")
	I1205 20:17:43.475165  576500 client.go:168] LocalClient.Create starting
	I1205 20:17:43.475198  576500 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 20:17:43.475235  576500 main.go:141] libmachine: Decoding PEM data...
	I1205 20:17:43.475252  576500 main.go:141] libmachine: Parsing certificate...
	I1205 20:17:43.475319  576500 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 20:17:43.475351  576500 main.go:141] libmachine: Decoding PEM data...
	I1205 20:17:43.475365  576500 main.go:141] libmachine: Parsing certificate...
	I1205 20:17:43.475388  576500 main.go:141] libmachine: Running pre-create checks...
	I1205 20:17:43.475396  576500 main.go:141] libmachine: (NoKubernetes-739327) Calling .PreCreateCheck
	I1205 20:17:43.475834  576500 main.go:141] libmachine: (NoKubernetes-739327) Calling .GetConfigRaw
	I1205 20:17:43.476321  576500 main.go:141] libmachine: Creating machine...
	I1205 20:17:43.476330  576500 main.go:141] libmachine: (NoKubernetes-739327) Calling .Create
	I1205 20:17:43.476513  576500 main.go:141] libmachine: (NoKubernetes-739327) Creating KVM machine...
	I1205 20:17:43.478008  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | found existing default KVM network
	I1205 20:17:43.480166  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.479953  576527 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:aa:7e} reservation:<nil>}
	I1205 20:17:43.481435  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.481320  576527 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:10:91:27} reservation:<nil>}
	I1205 20:17:43.484172  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.484045  576527 network.go:209] skipping subnet 192.168.61.0/24 that is reserved: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 20:17:43.485166  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.485051  576527 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:66:e0:a4} reservation:<nil>}
	I1205 20:17:43.487537  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.486709  576527 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001193f0}
	I1205 20:17:43.487555  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | created network xml: 
	I1205 20:17:43.487565  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | <network>
	I1205 20:17:43.487572  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   <name>mk-NoKubernetes-739327</name>
	I1205 20:17:43.487579  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   <dns enable='no'/>
	I1205 20:17:43.487585  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   
	I1205 20:17:43.487601  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I1205 20:17:43.487608  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |     <dhcp>
	I1205 20:17:43.487616  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I1205 20:17:43.487628  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |     </dhcp>
	I1205 20:17:43.487635  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   </ip>
	I1205 20:17:43.487640  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   
	I1205 20:17:43.487647  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | </network>
	I1205 20:17:43.487652  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | 
	I1205 20:17:43.493671  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | trying to create private KVM network mk-NoKubernetes-739327 192.168.83.0/24...
	I1205 20:17:43.579342  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | private KVM network mk-NoKubernetes-739327 192.168.83.0/24 created
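
The DBG lines above walk the candidate private /24s (192.168.39, .50, .61, .72), skip the ones that are taken or reserved, and settle on 192.168.83.0/24 for the new libvirt network. A simplified sketch of that selection, assuming the set of occupied subnets is already known (minikube actually probes libvirt networks and host routes to build it):

```go
package main

import "fmt"

// pickFreeSubnet returns the first candidate /24 that is neither in use by
// an existing libvirt network nor reserved, loosely following the
// skip-taken / skip-reserved decisions visible in the log.
func pickFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
	for _, cidr := range candidates {
		if taken[cidr] {
			fmt.Printf("skipping subnet %s that is taken or reserved\n", cidr)
			continue
		}
		return cidr, true
	}
	return "", false
}

func main() {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24", "192.168.83.0/24"}
	taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true, "192.168.61.0/24": true, "192.168.72.0/24": true}
	if cidr, ok := pickFreeSubnet(candidates, taken); ok {
		fmt.Println("using free private subnet", cidr)
	}
}
```
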
	I1205 20:17:43.579409  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.579300  576527 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:17:43.579451  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327 ...
	I1205 20:17:43.579471  576500 main.go:141] libmachine: (NoKubernetes-739327) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:17:43.579494  576500 main.go:141] libmachine: (NoKubernetes-739327) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:17:43.892195  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.892030  576527 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327/id_rsa...
	I1205 20:17:44.001971  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:44.001821  576527 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327/NoKubernetes-739327.rawdisk...
	I1205 20:17:44.001997  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Writing magic tar header
	I1205 20:17:44.002016  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Writing SSH key tar header
	I1205 20:17:44.002032  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:44.001985  576527 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327 ...
	I1205 20:17:44.002189  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327
	I1205 20:17:44.002215  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 20:17:44.002228  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327 (perms=drwx------)
	I1205 20:17:44.002238  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:17:44.002249  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 20:17:44.002256  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:17:44.002267  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:17:44.002273  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home
	I1205 20:17:44.002282  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Skipping /home - not owner
	I1205 20:17:44.002294  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:17:44.002303  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 20:17:44.002326  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 20:17:44.002339  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:17:44.002349  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:17:44.002354  576500 main.go:141] libmachine: (NoKubernetes-739327) Creating domain...
	I1205 20:17:44.004060  576500 main.go:141] libmachine: (NoKubernetes-739327) define libvirt domain using xml: 
	I1205 20:17:44.004073  576500 main.go:141] libmachine: (NoKubernetes-739327) <domain type='kvm'>
	I1205 20:17:44.004085  576500 main.go:141] libmachine: (NoKubernetes-739327)   <name>NoKubernetes-739327</name>
	I1205 20:17:44.004092  576500 main.go:141] libmachine: (NoKubernetes-739327)   <memory unit='MiB'>6000</memory>
	I1205 20:17:44.004100  576500 main.go:141] libmachine: (NoKubernetes-739327)   <vcpu>2</vcpu>
	I1205 20:17:44.004109  576500 main.go:141] libmachine: (NoKubernetes-739327)   <features>
	I1205 20:17:44.004115  576500 main.go:141] libmachine: (NoKubernetes-739327)     <acpi/>
	I1205 20:17:44.004119  576500 main.go:141] libmachine: (NoKubernetes-739327)     <apic/>
	I1205 20:17:44.004133  576500 main.go:141] libmachine: (NoKubernetes-739327)     <pae/>
	I1205 20:17:44.004137  576500 main.go:141] libmachine: (NoKubernetes-739327)     
	I1205 20:17:44.004143  576500 main.go:141] libmachine: (NoKubernetes-739327)   </features>
	I1205 20:17:44.004147  576500 main.go:141] libmachine: (NoKubernetes-739327)   <cpu mode='host-passthrough'>
	I1205 20:17:44.004152  576500 main.go:141] libmachine: (NoKubernetes-739327)   
	I1205 20:17:44.004156  576500 main.go:141] libmachine: (NoKubernetes-739327)   </cpu>
	I1205 20:17:44.004162  576500 main.go:141] libmachine: (NoKubernetes-739327)   <os>
	I1205 20:17:44.004173  576500 main.go:141] libmachine: (NoKubernetes-739327)     <type>hvm</type>
	I1205 20:17:44.004179  576500 main.go:141] libmachine: (NoKubernetes-739327)     <boot dev='cdrom'/>
	I1205 20:17:44.004188  576500 main.go:141] libmachine: (NoKubernetes-739327)     <boot dev='hd'/>
	I1205 20:17:44.004198  576500 main.go:141] libmachine: (NoKubernetes-739327)     <bootmenu enable='no'/>
	I1205 20:17:44.004203  576500 main.go:141] libmachine: (NoKubernetes-739327)   </os>
	I1205 20:17:44.004208  576500 main.go:141] libmachine: (NoKubernetes-739327)   <devices>
	I1205 20:17:44.004214  576500 main.go:141] libmachine: (NoKubernetes-739327)     <disk type='file' device='cdrom'>
	I1205 20:17:44.004225  576500 main.go:141] libmachine: (NoKubernetes-739327)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327/boot2docker.iso'/>
	I1205 20:17:44.004231  576500 main.go:141] libmachine: (NoKubernetes-739327)       <target dev='hdc' bus='scsi'/>
	I1205 20:17:44.004237  576500 main.go:141] libmachine: (NoKubernetes-739327)       <readonly/>
	I1205 20:17:44.004242  576500 main.go:141] libmachine: (NoKubernetes-739327)     </disk>
	I1205 20:17:44.004250  576500 main.go:141] libmachine: (NoKubernetes-739327)     <disk type='file' device='disk'>
	I1205 20:17:44.004257  576500 main.go:141] libmachine: (NoKubernetes-739327)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:17:44.004332  576500 main.go:141] libmachine: (NoKubernetes-739327)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327/NoKubernetes-739327.rawdisk'/>
	I1205 20:17:44.004352  576500 main.go:141] libmachine: (NoKubernetes-739327)       <target dev='hda' bus='virtio'/>
	I1205 20:17:44.004361  576500 main.go:141] libmachine: (NoKubernetes-739327)     </disk>
	I1205 20:17:44.004368  576500 main.go:141] libmachine: (NoKubernetes-739327)     <interface type='network'>
	I1205 20:17:44.004386  576500 main.go:141] libmachine: (NoKubernetes-739327)       <source network='mk-NoKubernetes-739327'/>
	I1205 20:17:44.004392  576500 main.go:141] libmachine: (NoKubernetes-739327)       <model type='virtio'/>
	I1205 20:17:44.004400  576500 main.go:141] libmachine: (NoKubernetes-739327)     </interface>
	I1205 20:17:44.004405  576500 main.go:141] libmachine: (NoKubernetes-739327)     <interface type='network'>
	I1205 20:17:44.004413  576500 main.go:141] libmachine: (NoKubernetes-739327)       <source network='default'/>
	I1205 20:17:44.004419  576500 main.go:141] libmachine: (NoKubernetes-739327)       <model type='virtio'/>
	I1205 20:17:44.004426  576500 main.go:141] libmachine: (NoKubernetes-739327)     </interface>
	I1205 20:17:44.004432  576500 main.go:141] libmachine: (NoKubernetes-739327)     <serial type='pty'>
	I1205 20:17:44.004440  576500 main.go:141] libmachine: (NoKubernetes-739327)       <target port='0'/>
	I1205 20:17:44.004446  576500 main.go:141] libmachine: (NoKubernetes-739327)     </serial>
	I1205 20:17:44.004454  576500 main.go:141] libmachine: (NoKubernetes-739327)     <console type='pty'>
	I1205 20:17:44.004461  576500 main.go:141] libmachine: (NoKubernetes-739327)       <target type='serial' port='0'/>
	I1205 20:17:44.004468  576500 main.go:141] libmachine: (NoKubernetes-739327)     </console>
	I1205 20:17:44.004474  576500 main.go:141] libmachine: (NoKubernetes-739327)     <rng model='virtio'>
	I1205 20:17:44.004482  576500 main.go:141] libmachine: (NoKubernetes-739327)       <backend model='random'>/dev/random</backend>
	I1205 20:17:44.004487  576500 main.go:141] libmachine: (NoKubernetes-739327)     </rng>
	I1205 20:17:44.004493  576500 main.go:141] libmachine: (NoKubernetes-739327)     
	I1205 20:17:44.004498  576500 main.go:141] libmachine: (NoKubernetes-739327)     
	I1205 20:17:44.004505  576500 main.go:141] libmachine: (NoKubernetes-739327)   </devices>
	I1205 20:17:44.004511  576500 main.go:141] libmachine: (NoKubernetes-739327) </domain>
	I1205 20:17:44.004523  576500 main.go:141] libmachine: (NoKubernetes-739327) 
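
The domain definition above is rendered from the machine config: name, 6000 MiB of memory, 2 vCPUs, the boot2docker ISO, a raw virtio disk and two virtio NICs. A cut-down sketch of producing a similar document with Go's `text/template`; the template and field names are illustrative, not the kvm2 driver's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// domainTmpl is a reduced libvirt domain definition with the same shape as
// the XML in the log: name, memory, vCPUs and a raw virtio disk.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	cfg := domainConfig{Name: "NoKubernetes-739327", MemoryMiB: 6000, CPUs: 2,
		DiskPath: "/var/lib/minikube/NoKubernetes-739327.rawdisk"}
	// The rendered XML would then be handed to libvirt, as the
	// "define libvirt domain using xml" / "Creating domain..." lines show.
	_ = t.Execute(os.Stdout, cfg)
}
```
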
	I1205 20:17:44.009263  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:8d:63:cd in network default
	I1205 20:17:44.010014  576500 main.go:141] libmachine: (NoKubernetes-739327) Ensuring networks are active...
	I1205 20:17:44.010037  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:44.010956  576500 main.go:141] libmachine: (NoKubernetes-739327) Ensuring network default is active
	I1205 20:17:44.011247  576500 main.go:141] libmachine: (NoKubernetes-739327) Ensuring network mk-NoKubernetes-739327 is active
	I1205 20:17:44.011978  576500 main.go:141] libmachine: (NoKubernetes-739327) Getting domain xml...
	I1205 20:17:44.012966  576500 main.go:141] libmachine: (NoKubernetes-739327) Creating domain...
	I1205 20:17:45.318034  576500 main.go:141] libmachine: (NoKubernetes-739327) Waiting to get IP...
	I1205 20:17:45.318936  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:45.319391  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:45.319434  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:45.319370  576527 retry.go:31] will retry after 267.529682ms: waiting for machine to come up
	I1205 20:17:45.589154  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:45.589730  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:45.589757  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:45.589645  576527 retry.go:31] will retry after 239.95428ms: waiting for machine to come up
	I1205 20:17:45.831212  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:45.831736  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:45.831758  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:45.831658  576527 retry.go:31] will retry after 315.686144ms: waiting for machine to come up
	I1205 20:17:46.149152  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:46.149628  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:46.149650  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:46.149584  576527 retry.go:31] will retry after 504.61278ms: waiting for machine to come up
	I1205 20:17:46.656468  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:46.657044  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:46.657064  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:46.656993  576527 retry.go:31] will retry after 576.866276ms: waiting for machine to come up
	I1205 20:17:47.235804  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:47.236300  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:47.236321  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:47.236291  576527 retry.go:31] will retry after 758.40512ms: waiting for machine to come up
	I1205 20:17:47.996023  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:47.996626  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:47.996647  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:47.996578  576527 retry.go:31] will retry after 902.687934ms: waiting for machine to come up
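
The "will retry after ..." lines implement a jittered, growing backoff while libmachine waits for DHCP to hand the new domain an address. A generic version of that wait loop, with made-up timings and a stubbed lookup function (not the retry.go implementation itself):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it reports an address or the deadline
// passes, sleeping a little longer (plus jitter) after each miss, in the
// spirit of the retry lines above.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay += delay / 2 // grow the base delay between attempts
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		if attempts < 4 {
			return "", false // simulate the domain not having an address yet
		}
		return "192.168.83.2", true
	}, time.Minute)
	fmt.Println(ip, err)
}
```
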
	I1205 20:17:45.195989  576117 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.080732641s)
	I1205 20:17:45.196034  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:45.450094  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:45.532646  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:45.717580  576117 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:17:45.717696  576117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:17:46.217999  576117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:17:46.718772  576117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:17:46.748637  576117 api_server.go:72] duration metric: took 1.031055079s to wait for apiserver process to appear ...
	I1205 20:17:46.748673  576117 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:17:46.748701  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:49.039383  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:17:49.039418  576117 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:17:49.039436  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:49.047480  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:17:49.047516  576117 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:17:49.248791  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:49.254267  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:17:49.254293  576117 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:17:49.749458  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:49.754410  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:17:49.754438  576117 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:17:50.249505  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:50.259633  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:17:50.259677  576117 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:17:50.749177  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:50.763392  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I1205 20:17:50.771564  576117 api_server.go:141] control plane version: v1.31.2
	I1205 20:17:50.771606  576117 api_server.go:131] duration metric: took 4.022924466s to wait for apiserver health ...
	I1205 20:17:50.771617  576117 cni.go:84] Creating CNI manager for ""
	I1205 20:17:50.771626  576117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:17:50.773378  576117 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
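
The healthz transcript above moves from 403 (the request is treated as anonymous before RBAC bootstrapping), to 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, to 200 once the control plane is ready, and the caller simply keeps polling. A bare-bones poller with the same shape; the endpoint is taken from the log, and the insecure TLS client is an illustration shortcut, not what a production caller should do:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok". Non-200 answers (403 before RBAC allows the request, 500 while
// post-start hooks run) are logged and retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: a real caller should trust the cluster CA
		// instead of skipping certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	_ = waitForHealthz("https://192.168.50.246:8443/healthz", 4*time.Minute)
}
```
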
	I1205 20:17:50.928409  575390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	W1205 20:17:51.009918  575390 kubeadm.go:714] addon install failed, will retry: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns": dial tcp 192.168.72.205:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I1205 20:17:51.009971  575390 kubeadm.go:597] duration metric: took 38.119172235s to restartPrimaryControlPlane
	W1205 20:17:51.010074  575390 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:17:51.010112  575390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:17:48.901201  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:48.901731  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:48.901748  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:48.901689  576527 retry.go:31] will retry after 1.229707548s: waiting for machine to come up
	I1205 20:17:50.133285  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:50.133845  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:50.133870  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:50.133778  576527 retry.go:31] will retry after 1.36134392s: waiting for machine to come up
	I1205 20:17:51.497233  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:51.497707  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:51.497738  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:51.497650  576527 retry.go:31] will retry after 1.794206833s: waiting for machine to come up
	I1205 20:17:53.794876  575390 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.784733298s)
	I1205 20:17:53.794975  575390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:17:53.812657  575390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:17:53.824857  575390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:17:53.836017  575390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:17:53.836054  575390 kubeadm.go:157] found existing configuration files:
	
	I1205 20:17:53.836120  575390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1205 20:17:53.844175  575390 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:17:53.844288  575390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:17:53.855084  575390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1205 20:17:53.865499  575390 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:17:53.865581  575390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:17:53.876578  575390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1205 20:17:53.885252  575390 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:17:53.885348  575390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:17:53.896385  575390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1205 20:17:53.907071  575390 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:17:53.907148  575390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:17:53.918097  575390 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:17:53.959515  575390 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1205 20:17:53.959622  575390 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:17:54.085094  575390 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:17:54.085254  575390 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:17:54.085440  575390 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:17:54.239720  575390 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:17:50.774777  576117 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:17:50.792037  576117 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
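
The two lines above drop a 496-byte bridge CNI config at /etc/cni/net.d/1-k8s.conflist after the "kvm2 driver + crio runtime, recommending bridge" decision. A guess at writing an equivalent conflist from Go; the exact JSON minikube ships (plugin options, pod subnet) may differ from these assumed values:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// writeBridgeConflist writes a minimal CNI conflist with a bridge plugin
// and host-local IPAM, similar in spirit to the file the log copies to
// /etc/cni/net.d/1-k8s.conflist. Field values here are assumptions.
func writeBridgeConflist(path string) error {
	conf := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := writeBridgeConflist("/tmp/1-k8s.conflist"); err != nil {
		fmt.Println(err)
	}
}
```
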
	I1205 20:17:50.823192  576117 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:17:50.823331  576117 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 20:17:50.823356  576117 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 20:17:50.838187  576117 system_pods.go:59] 6 kube-system pods found
	I1205 20:17:50.838243  576117 system_pods.go:61] "coredns-7c65d6cfc9-x529d" [0c29f67b-db11-4444-a0ed-18a831e6a5fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:17:50.838257  576117 system_pods.go:61] "etcd-pause-594992" [bcc74ca9-37f0-4ab7-a3b9-a53b2d524754] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:17:50.838268  576117 system_pods.go:61] "kube-apiserver-pause-594992" [96a47b19-a5be-4ab2-89c9-296af79014cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:17:50.838282  576117 system_pods.go:61] "kube-controller-manager-pause-594992" [9f906a5e-85d3-4cf3-aeb0-8ad317ba6589] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:17:50.838297  576117 system_pods.go:61] "kube-proxy-jxr6b" [45d94ddc-c393-4083-807d-febc10b83bd5] Running
	I1205 20:17:50.838310  576117 system_pods.go:61] "kube-scheduler-pause-594992" [a9cea55a-87f2-4f79-96d3-318229726ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:17:50.838328  576117 system_pods.go:74] duration metric: took 15.105188ms to wait for pod list to return data ...
	I1205 20:17:50.838345  576117 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:17:50.843124  576117 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:17:50.843157  576117 node_conditions.go:123] node cpu capacity is 2
	I1205 20:17:50.843170  576117 node_conditions.go:105] duration metric: took 4.812608ms to run NodePressure ...
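
The node_conditions lines read the node's ephemeral-storage and CPU capacity before declaring the NodePressure check done. A small client-go sketch of the same read; the kubeconfig path is a hypothetical stand-in for the profile's kubeconfig:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; minikube uses its own profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Allocatable carries the per-resource capacity the log reports.
		storage := n.Status.Allocatable["ephemeral-storage"]
		cpu := n.Status.Allocatable["cpu"]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
```
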
	I1205 20:17:50.843197  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:51.121331  576117 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:17:51.127024  576117 kubeadm.go:739] kubelet initialised
	I1205 20:17:51.127065  576117 kubeadm.go:740] duration metric: took 5.696124ms waiting for restarted kubelet to initialise ...
	I1205 20:17:51.127077  576117 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:17:51.133535  576117 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:51.142930  576117 pod_ready.go:93] pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace has status "Ready":"True"
	I1205 20:17:51.142953  576117 pod_ready.go:82] duration metric: took 9.393735ms for pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:51.142965  576117 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:53.151157  576117 pod_ready.go:103] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"False"
	I1205 20:17:54.242674  575390 out.go:235]   - Generating certificates and keys ...
	I1205 20:17:54.242778  575390 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:17:54.242908  575390 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:17:54.243048  575390 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:17:54.243141  575390 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:17:54.243258  575390 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:17:54.243337  575390 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:17:54.243429  575390 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:17:54.243534  575390 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:17:54.243639  575390 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:17:54.243756  575390 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:17:54.243829  575390 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:17:54.243920  575390 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:17:54.413660  575390 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:17:54.707434  575390 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:17:54.807580  575390 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:17:54.944901  575390 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:17:55.028951  575390 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:17:55.029906  575390 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:17:55.029975  575390 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:17:55.181662  575390 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:17:55.183260  575390 out.go:235]   - Booting up control plane ...
	I1205 20:17:55.183426  575390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:17:55.186833  575390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:17:55.187807  575390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:17:55.188641  575390 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:17:55.190430  575390 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:17:53.293376  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:53.293859  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:53.293881  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:53.293796  576527 retry.go:31] will retry after 1.905908252s: waiting for machine to come up
	I1205 20:17:55.201586  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:55.202144  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:55.202170  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:55.202096  576527 retry.go:31] will retry after 2.625842394s: waiting for machine to come up
	I1205 20:17:57.830496  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:57.831027  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:57.831056  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:57.830952  576527 retry.go:31] will retry after 3.283441276s: waiting for machine to come up
	I1205 20:17:55.651384  576117 pod_ready.go:103] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"False"
	I1205 20:17:58.150174  576117 pod_ready.go:103] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"False"
	I1205 20:18:00.150481  576117 pod_ready.go:103] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"False"
	I1205 20:18:00.650412  576117 pod_ready.go:93] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:00.650440  576117 pod_ready.go:82] duration metric: took 9.507466804s for pod "etcd-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.650454  576117 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.656259  576117 pod_ready.go:93] pod "kube-apiserver-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:00.656308  576117 pod_ready.go:82] duration metric: took 5.844719ms for pod "kube-apiserver-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.656323  576117 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.662498  576117 pod_ready.go:93] pod "kube-controller-manager-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:00.662521  576117 pod_ready.go:82] duration metric: took 6.191357ms for pod "kube-controller-manager-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.662531  576117 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jxr6b" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.668070  576117 pod_ready.go:93] pod "kube-proxy-jxr6b" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:00.668092  576117 pod_ready.go:82] duration metric: took 5.55508ms for pod "kube-proxy-jxr6b" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.668100  576117 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.673570  576117 pod_ready.go:93] pod "kube-scheduler-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:00.673596  576117 pod_ready.go:82] duration metric: took 5.488053ms for pod "kube-scheduler-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.673605  576117 pod_ready.go:39] duration metric: took 9.546516314s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:18:00.673629  576117 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:18:00.687979  576117 ops.go:34] apiserver oom_adj: -16
	I1205 20:18:00.688029  576117 kubeadm.go:597] duration metric: took 29.712615502s to restartPrimaryControlPlane
	I1205 20:18:00.688043  576117 kubeadm.go:394] duration metric: took 29.890338567s to StartCluster
	I1205 20:18:00.688069  576117 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:18:00.688169  576117 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:18:00.689119  576117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:18:00.689399  576117 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:18:00.689538  576117 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:18:00.689656  576117 config.go:182] Loaded profile config "pause-594992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:18:00.691335  576117 out.go:177] * Enabled addons: 
	I1205 20:18:00.691344  576117 out.go:177] * Verifying Kubernetes components...
	I1205 20:18:01.117806  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:18:01.118357  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:18:01.118380  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:18:01.118260  576527 retry.go:31] will retry after 5.355367005s: waiting for machine to come up
	I1205 20:18:00.692685  576117 addons.go:510] duration metric: took 3.154985ms for enable addons: enabled=[]
	I1205 20:18:00.692781  576117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:18:00.855958  576117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:18:00.871761  576117 node_ready.go:35] waiting up to 6m0s for node "pause-594992" to be "Ready" ...
	I1205 20:18:00.875090  576117 node_ready.go:49] node "pause-594992" has status "Ready":"True"
	I1205 20:18:00.875119  576117 node_ready.go:38] duration metric: took 3.322664ms for node "pause-594992" to be "Ready" ...
	I1205 20:18:00.875132  576117 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:18:01.049693  576117 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:01.448260  576117 pod_ready.go:93] pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:01.448323  576117 pod_ready.go:82] duration metric: took 398.600753ms for pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:01.448335  576117 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:01.848344  576117 pod_ready.go:93] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:01.848380  576117 pod_ready.go:82] duration metric: took 400.03721ms for pod "etcd-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:01.848395  576117 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:02.248436  576117 pod_ready.go:93] pod "kube-apiserver-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:02.248470  576117 pod_ready.go:82] duration metric: took 400.066946ms for pod "kube-apiserver-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:02.248486  576117 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:02.647998  576117 pod_ready.go:93] pod "kube-controller-manager-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:02.648040  576117 pod_ready.go:82] duration metric: took 399.543946ms for pod "kube-controller-manager-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:02.648057  576117 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jxr6b" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:03.049235  576117 pod_ready.go:93] pod "kube-proxy-jxr6b" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:03.049259  576117 pod_ready.go:82] duration metric: took 401.194648ms for pod "kube-proxy-jxr6b" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:03.049269  576117 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:03.448952  576117 pod_ready.go:93] pod "kube-scheduler-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:03.448977  576117 pod_ready.go:82] duration metric: took 399.701483ms for pod "kube-scheduler-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:03.448986  576117 pod_ready.go:39] duration metric: took 2.573841516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:18:03.449003  576117 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:18:03.449054  576117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:18:03.466422  576117 api_server.go:72] duration metric: took 2.776987485s to wait for apiserver process to appear ...
	I1205 20:18:03.466457  576117 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:18:03.466485  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:18:03.471057  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I1205 20:18:03.471994  576117 api_server.go:141] control plane version: v1.31.2
	I1205 20:18:03.472016  576117 api_server.go:131] duration metric: took 5.551537ms to wait for apiserver health ...
	I1205 20:18:03.472024  576117 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:18:03.650665  576117 system_pods.go:59] 6 kube-system pods found
	I1205 20:18:03.650702  576117 system_pods.go:61] "coredns-7c65d6cfc9-x529d" [0c29f67b-db11-4444-a0ed-18a831e6a5fe] Running
	I1205 20:18:03.650710  576117 system_pods.go:61] "etcd-pause-594992" [bcc74ca9-37f0-4ab7-a3b9-a53b2d524754] Running
	I1205 20:18:03.650715  576117 system_pods.go:61] "kube-apiserver-pause-594992" [96a47b19-a5be-4ab2-89c9-296af79014cb] Running
	I1205 20:18:03.650719  576117 system_pods.go:61] "kube-controller-manager-pause-594992" [9f906a5e-85d3-4cf3-aeb0-8ad317ba6589] Running
	I1205 20:18:03.650729  576117 system_pods.go:61] "kube-proxy-jxr6b" [45d94ddc-c393-4083-807d-febc10b83bd5] Running
	I1205 20:18:03.650734  576117 system_pods.go:61] "kube-scheduler-pause-594992" [a9cea55a-87f2-4f79-96d3-318229726ded] Running
	I1205 20:18:03.650744  576117 system_pods.go:74] duration metric: took 178.711793ms to wait for pod list to return data ...
	I1205 20:18:03.650758  576117 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:18:03.847033  576117 default_sa.go:45] found service account: "default"
	I1205 20:18:03.847067  576117 default_sa.go:55] duration metric: took 196.299104ms for default service account to be created ...
	I1205 20:18:03.847079  576117 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:18:04.049711  576117 system_pods.go:86] 6 kube-system pods found
	I1205 20:18:04.049751  576117 system_pods.go:89] "coredns-7c65d6cfc9-x529d" [0c29f67b-db11-4444-a0ed-18a831e6a5fe] Running
	I1205 20:18:04.049762  576117 system_pods.go:89] "etcd-pause-594992" [bcc74ca9-37f0-4ab7-a3b9-a53b2d524754] Running
	I1205 20:18:04.049768  576117 system_pods.go:89] "kube-apiserver-pause-594992" [96a47b19-a5be-4ab2-89c9-296af79014cb] Running
	I1205 20:18:04.049775  576117 system_pods.go:89] "kube-controller-manager-pause-594992" [9f906a5e-85d3-4cf3-aeb0-8ad317ba6589] Running
	I1205 20:18:04.049780  576117 system_pods.go:89] "kube-proxy-jxr6b" [45d94ddc-c393-4083-807d-febc10b83bd5] Running
	I1205 20:18:04.049785  576117 system_pods.go:89] "kube-scheduler-pause-594992" [a9cea55a-87f2-4f79-96d3-318229726ded] Running
	I1205 20:18:04.049795  576117 system_pods.go:126] duration metric: took 202.70981ms to wait for k8s-apps to be running ...
	I1205 20:18:04.049804  576117 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:18:04.049862  576117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:18:04.069009  576117 system_svc.go:56] duration metric: took 19.190528ms WaitForService to wait for kubelet
	I1205 20:18:04.069045  576117 kubeadm.go:582] duration metric: took 3.379619872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:18:04.069103  576117 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:18:04.247603  576117 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:18:04.247632  576117 node_conditions.go:123] node cpu capacity is 2
	I1205 20:18:04.247646  576117 node_conditions.go:105] duration metric: took 178.537968ms to run NodePressure ...
	I1205 20:18:04.247660  576117 start.go:241] waiting for startup goroutines ...
	I1205 20:18:04.247669  576117 start.go:246] waiting for cluster config update ...
	I1205 20:18:04.247679  576117 start.go:255] writing updated cluster config ...
	I1205 20:18:04.248548  576117 ssh_runner.go:195] Run: rm -f paused
	I1205 20:18:04.300672  576117 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:18:04.302750  576117 out.go:177] * Done! kubectl is now configured to use "pause-594992" cluster and "default" namespace by default
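For reference, the healthz polling recorded above (api_server.go:253) amounts to probing https://192.168.50.246:8443/healthz until it answers 200 "ok". The following is a minimal, self-contained Go sketch of such a probe; the endpoint is taken from the log, while the 500ms retry interval, the one-minute deadline, and the skipped TLS verification are illustrative assumptions, not minikube's actual implementation.

	// healthz_probe.go - sketch only; assumes the apiserver address logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// InsecureSkipVerify is used only because this sketch does not load the
		// cluster CA that minikube itself trusts; do not do this outside a sketch.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		const url = "https://192.168.50.246:8443/healthz" // address from the log above

		deadline := time.Now().Add(time.Minute) // assumed overall deadline
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // e.g. "200: ok"
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // assumed retry interval
		}
		fmt.Println("healthz did not become ready before the deadline")
	}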
	
	
	==> CRI-O <==
	Dec 05 20:18:04 pause-594992 crio[2093]: time="2024-12-05 20:18:04.995922433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429884995890593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5b247a8-0bb3-4831-8cd4-ab26fee77867 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:04 pause-594992 crio[2093]: time="2024-12-05 20:18:04.996591531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88290a92-0ab7-4e0d-8c4b-60248aaacd68 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:04 pause-594992 crio[2093]: time="2024-12-05 20:18:04.996669482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88290a92-0ab7-4e0d-8c4b-60248aaacd68 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:04 pause-594992 crio[2093]: time="2024-12-05 20:18:04.996983323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f0ab945fce0cfc411e685e34d69b7861c565af3b31ef78157c39cb1a4526b3a,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733429869873872676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83504d06bbf4f9abdf00ce5d5e1cc316692b206056054d6f250a9ae65843afbe,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733429869859822504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a448fe5de6cb26361e360c5526cc48cb624e6557d23a1b481db22406456fec,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733429866245822008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926cc820a9
957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f404bc68434b2921a7d68bd1fa4bc5aa9a8bd64bfaa3476c53afb80d203fe4c8,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733429866275969321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafd
a754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59948e40715f67533ef4cf36b4f0b69a232f601aafabaeb14485b8e28c2e41a,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733429866250239749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0847a0a21829616614
d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff734496e3eebdee5a7cf70c8fac85080bc8736f14b86b7157aa102294a02e7,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733429866215410030,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733429850654296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733429849869245309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafda754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733429849778026060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-
jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733429849769351069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733429849728807733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: f0847a0a21829616614d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733429849660169225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 926cc820a9957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88290a92-0ab7-4e0d-8c4b-60248aaacd68 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.056146652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d94686a-2dd6-47e3-b517-800903f7fbd7 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.056244656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d94686a-2dd6-47e3-b517-800903f7fbd7 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.058675702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e74ed8e1-c586-48c0-8383-3f4240937375 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.059283066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429885059257181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e74ed8e1-c586-48c0-8383-3f4240937375 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.060041806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=483c4420-4c56-4fa3-a802-7b8429f12597 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.060196011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=483c4420-4c56-4fa3-a802-7b8429f12597 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.060521073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f0ab945fce0cfc411e685e34d69b7861c565af3b31ef78157c39cb1a4526b3a,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733429869873872676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83504d06bbf4f9abdf00ce5d5e1cc316692b206056054d6f250a9ae65843afbe,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733429869859822504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a448fe5de6cb26361e360c5526cc48cb624e6557d23a1b481db22406456fec,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733429866245822008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926cc820a9
957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f404bc68434b2921a7d68bd1fa4bc5aa9a8bd64bfaa3476c53afb80d203fe4c8,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733429866275969321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafd
a754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59948e40715f67533ef4cf36b4f0b69a232f601aafabaeb14485b8e28c2e41a,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733429866250239749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0847a0a21829616614
d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff734496e3eebdee5a7cf70c8fac85080bc8736f14b86b7157aa102294a02e7,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733429866215410030,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733429850654296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733429849869245309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafda754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733429849778026060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-
jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733429849769351069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733429849728807733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: f0847a0a21829616614d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733429849660169225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 926cc820a9957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=483c4420-4c56-4fa3-a802-7b8429f12597 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.111051105Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff3dfd9f-5a1f-4231-9d4a-50155cf16d73 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.111198260Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff3dfd9f-5a1f-4231-9d4a-50155cf16d73 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.112735965Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70a52eaa-f205-49b9-9b8d-dd25871edda1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.113216012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429885113189990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70a52eaa-f205-49b9-9b8d-dd25871edda1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.113830206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d7615b2-f96a-4aa0-b9e2-cdba89b03437 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.114068485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d7615b2-f96a-4aa0-b9e2-cdba89b03437 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.114441479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f0ab945fce0cfc411e685e34d69b7861c565af3b31ef78157c39cb1a4526b3a,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733429869873872676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83504d06bbf4f9abdf00ce5d5e1cc316692b206056054d6f250a9ae65843afbe,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733429869859822504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a448fe5de6cb26361e360c5526cc48cb624e6557d23a1b481db22406456fec,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733429866245822008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926cc820a9
957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f404bc68434b2921a7d68bd1fa4bc5aa9a8bd64bfaa3476c53afb80d203fe4c8,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733429866275969321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafd
a754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59948e40715f67533ef4cf36b4f0b69a232f601aafabaeb14485b8e28c2e41a,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733429866250239749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0847a0a21829616614
d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff734496e3eebdee5a7cf70c8fac85080bc8736f14b86b7157aa102294a02e7,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733429866215410030,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733429850654296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733429849869245309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafda754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733429849778026060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-
jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733429849769351069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733429849728807733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: f0847a0a21829616614d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733429849660169225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 926cc820a9957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d7615b2-f96a-4aa0-b9e2-cdba89b03437 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.173642173Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=552009c9-5079-441b-a668-47e2f84e7017 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.173738184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=552009c9-5079-441b-a668-47e2f84e7017 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.175416833Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7e62bf8-7b80-4ce3-be6a-29e2a75b5445 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.175788149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429885175765674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7e62bf8-7b80-4ce3-be6a-29e2a75b5445 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.176521096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d987ac18-5cd7-48a9-89ff-6566098b42d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.176597472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d987ac18-5cd7-48a9-89ff-6566098b42d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:05 pause-594992 crio[2093]: time="2024-12-05 20:18:05.176829207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f0ab945fce0cfc411e685e34d69b7861c565af3b31ef78157c39cb1a4526b3a,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733429869873872676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83504d06bbf4f9abdf00ce5d5e1cc316692b206056054d6f250a9ae65843afbe,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733429869859822504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a448fe5de6cb26361e360c5526cc48cb624e6557d23a1b481db22406456fec,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733429866245822008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926cc820a9
957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f404bc68434b2921a7d68bd1fa4bc5aa9a8bd64bfaa3476c53afb80d203fe4c8,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733429866275969321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafd
a754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59948e40715f67533ef4cf36b4f0b69a232f601aafabaeb14485b8e28c2e41a,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733429866250239749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0847a0a21829616614
d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff734496e3eebdee5a7cf70c8fac85080bc8736f14b86b7157aa102294a02e7,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733429866215410030,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733429850654296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733429849869245309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafda754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733429849778026060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-
jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733429849769351069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733429849728807733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: f0847a0a21829616614d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733429849660169225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 926cc820a9957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d987ac18-5cd7-48a9-89ff-6566098b42d4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4f0ab945fce0c       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   15 seconds ago      Running             kube-proxy                2                   783af0822a6e8       kube-proxy-jxr6b
	83504d06bbf4f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 seconds ago      Running             coredns                   2                   fa8847f02f663       coredns-7c65d6cfc9-x529d
	f404bc68434b2       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   18 seconds ago      Running             kube-scheduler            2                   df80148bdf512       kube-scheduler-pause-594992
	d59948e40715f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   19 seconds ago      Running             kube-controller-manager   2                   51ea6c16af67b       kube-controller-manager-pause-594992
	00a448fe5de6c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   19 seconds ago      Running             kube-apiserver            2                   ad44895fc7f80       kube-apiserver-pause-594992
	9ff734496e3ee       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   19 seconds ago      Running             etcd                      2                   ad92e06f04c03       etcd-pause-594992
	495c73deed76c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   34 seconds ago      Exited              coredns                   1                   fa8847f02f663       coredns-7c65d6cfc9-x529d
	5d4b65f2c05d5       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   35 seconds ago      Exited              kube-scheduler            1                   df80148bdf512       kube-scheduler-pause-594992
	03439c2516853       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   35 seconds ago      Exited              kube-proxy                1                   783af0822a6e8       kube-proxy-jxr6b
	520bd43d560d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   35 seconds ago      Exited              etcd                      1                   ad92e06f04c03       etcd-pause-594992
	b496d56cafd2d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   35 seconds ago      Exited              kube-controller-manager   1                   51ea6c16af67b       kube-controller-manager-pause-594992
	8ad4a3ea6f362       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   35 seconds ago      Exited              kube-apiserver            1                   ad44895fc7f80       kube-apiserver-pause-594992
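The table above is the CRI-level view of the same state encoded in the preceding RuntimeService/ListContainers responses: every control-plane container has an Exited attempt 1 and a Running attempt 2, i.e. everything was restarted once during this run. As a rough sketch (assuming crictl is present in the guest and CRI-O is listening on its default socket, both of which are assumptions here), the same listing could be pulled by hand:

	# hypothetical reproduction of the listing above; socket path is CRI-O's default
	minikube ssh -p pause-594992 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a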
	
	
	==> coredns [495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163] <==
	
	
	==> coredns [83504d06bbf4f9abdf00ce5d5e1cc316692b206056054d6f250a9ae65843afbe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48267 - 31383 "HINFO IN 6921888526448902377.4476785710858164741. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022561039s
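The single HINFO lookup for a long random name, answered with NXDOMAIN, is consistent with CoreDNS's loop-detection self-query at startup (an NXDOMAIN, rather than the query looping back, suggests no forwarding loop). To inspect the Corefile behind this instance, a sketch, assuming the default kubeadm-style ConfigMap name and a kubeconfig context named after the profile:

	# hypothetical check of the CoreDNS configuration; context and ConfigMap names are assumptions
	kubectl --context pause-594992 -n kube-system get configmap coredns -o yaml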
	
	
	==> describe nodes <==
	Name:               pause-594992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-594992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=pause-594992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_17_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:17:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-594992
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:17:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:17:49 +0000   Thu, 05 Dec 2024 20:17:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:17:49 +0000   Thu, 05 Dec 2024 20:17:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:17:49 +0000   Thu, 05 Dec 2024 20:17:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:17:49 +0000   Thu, 05 Dec 2024 20:17:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.246
	  Hostname:    pause-594992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cecd04d00d5479395187937774b0a3a
	  System UUID:                7cecd04d-00d5-4793-9518-7937774b0a3a
	  Boot ID:                    beaf7767-4f39-4420-a660-294211704c2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-x529d                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     52s
	  kube-system                 etcd-pause-594992                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         57s
	  kube-system                 kube-apiserver-pause-594992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-controller-manager-pause-594992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-jxr6b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-pause-594992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeHasSufficientPID     67s (x7 over 67s)  kubelet          Node pause-594992 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node pause-594992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node pause-594992 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  67s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                57s                kubelet          Node pause-594992 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  57s                kubelet          Node pause-594992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s                kubelet          Node pause-594992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s                kubelet          Node pause-594992 status is now: NodeHasSufficientPID
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           53s                node-controller  Node pause-594992 event: Registered Node pause-594992 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node pause-594992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node pause-594992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node pause-594992 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                node-controller  Node pause-594992 event: Registered Node pause-594992 in Controller
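A quick cross-check of the Allocated resources block above: the per-pod CPU requests (100m + 100m + 250m + 200m + 0 + 100m) do sum to 750m, and the memory requests (70Mi + 100Mi) to 170Mi, so the node description is internally consistent. As a sketch (assuming the kubeconfig context carries the profile name, as minikube normally sets it up), the same description can be regenerated with:

	# hypothetical reproduction; --context assumed to match the minikube profile name
	kubectl --context pause-594992 describe node pause-594992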
	
	
	==> dmesg <==
	[ +10.579173] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.067290] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.082615] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.194178] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.145463] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.329912] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +4.652389] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +0.063280] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.417088] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +1.003817] kauditd_printk_skb: 57 callbacks suppressed
	[Dec 5 20:17] kauditd_printk_skb: 30 callbacks suppressed
	[  +1.107554] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +4.615621] systemd-fstab-generator[1350]: Ignoring "noauto" option for root device
	[  +0.094239] kauditd_printk_skb: 15 callbacks suppressed
	[ +14.703075] systemd-fstab-generator[2017]: Ignoring "noauto" option for root device
	[  +0.070069] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.060934] systemd-fstab-generator[2029]: Ignoring "noauto" option for root device
	[  +0.166088] systemd-fstab-generator[2043]: Ignoring "noauto" option for root device
	[  +0.143077] systemd-fstab-generator[2055]: Ignoring "noauto" option for root device
	[  +0.335172] systemd-fstab-generator[2084]: Ignoring "noauto" option for root device
	[  +1.114512] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[  +4.387595] kauditd_printk_skb: 196 callbacks suppressed
	[ +11.802804] systemd-fstab-generator[3063]: Ignoring "noauto" option for root device
	[  +7.462104] kauditd_printk_skb: 51 callbacks suppressed
	[Dec 5 20:18] systemd-fstab-generator[3513]: Ignoring "noauto" option for root device
	
	
	==> etcd [520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f] <==
	{"level":"info","ts":"2024-12-05T20:17:31.486981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 received MsgPreVoteResp from 26a48da650cf9008 at term 2"}
	{"level":"info","ts":"2024-12-05T20:17:31.487023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became candidate at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:31.487047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 received MsgVoteResp from 26a48da650cf9008 at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:31.487155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became leader at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:31.487196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 26a48da650cf9008 elected leader 26a48da650cf9008 at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:31.494457Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:17:31.495449Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:17:31.496244Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:17:31.494411Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"26a48da650cf9008","local-member-attributes":"{Name:pause-594992 ClientURLs:[https://192.168.50.246:2379]}","request-path":"/0/members/26a48da650cf9008/attributes","cluster-id":"4445e918310c0aa2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:17:31.499632Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:17:31.503449Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:17:31.503523Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:17:31.504503Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:17:31.542352Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.246:2379"}
	{"level":"info","ts":"2024-12-05T20:17:33.535869Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-05T20:17:33.535905Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-594992","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.246:2380"],"advertise-client-urls":["https://192.168.50.246:2379"]}
	{"level":"warn","ts":"2024-12-05T20:17:33.535963Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-05T20:17:33.536035Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/12/05 20:17:33 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-05T20:17:33.577286Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.246:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-05T20:17:33.577512Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.246:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-05T20:17:33.577833Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"26a48da650cf9008","current-leader-member-id":"26a48da650cf9008"}
	{"level":"info","ts":"2024-12-05T20:17:33.587506Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.246:2380"}
	{"level":"info","ts":"2024-12-05T20:17:33.587643Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.246:2380"}
	{"level":"info","ts":"2024-12-05T20:17:33.587671Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-594992","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.246:2380"],"advertise-client-urls":["https://192.168.50.246:2379"]}
	
	
	==> etcd [9ff734496e3eebdee5a7cf70c8fac85080bc8736f14b86b7157aa102294a02e7] <==
	{"level":"info","ts":"2024-12-05T20:17:46.526385Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4445e918310c0aa2","local-member-id":"26a48da650cf9008","added-peer-id":"26a48da650cf9008","added-peer-peer-urls":["https://192.168.50.246:2380"]}
	{"level":"info","ts":"2024-12-05T20:17:46.526477Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4445e918310c0aa2","local-member-id":"26a48da650cf9008","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:17:46.526533Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:17:46.542389Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:17:46.545364Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T20:17:46.545637Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"26a48da650cf9008","initial-advertise-peer-urls":["https://192.168.50.246:2380"],"listen-peer-urls":["https://192.168.50.246:2380"],"advertise-client-urls":["https://192.168.50.246:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.246:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T20:17:46.545669Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T20:17:46.545767Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.246:2380"}
	{"level":"info","ts":"2024-12-05T20:17:46.545776Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.246:2380"}
	{"level":"info","ts":"2024-12-05T20:17:47.485163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:47.485238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:47.485271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 received MsgPreVoteResp from 26a48da650cf9008 at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:47.485287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became candidate at term 4"}
	{"level":"info","ts":"2024-12-05T20:17:47.485301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 received MsgVoteResp from 26a48da650cf9008 at term 4"}
	{"level":"info","ts":"2024-12-05T20:17:47.485317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became leader at term 4"}
	{"level":"info","ts":"2024-12-05T20:17:47.485325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 26a48da650cf9008 elected leader 26a48da650cf9008 at term 4"}
	{"level":"info","ts":"2024-12-05T20:17:47.493334Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"26a48da650cf9008","local-member-attributes":"{Name:pause-594992 ClientURLs:[https://192.168.50.246:2379]}","request-path":"/0/members/26a48da650cf9008/attributes","cluster-id":"4445e918310c0aa2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:17:47.493525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:17:47.494391Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:17:47.501232Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:17:47.509144Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:17:47.509383Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:17:47.509418Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:17:47.510035Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:17:47.511003Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.246:2379"}
	
	
	==> kernel <==
	 20:18:05 up 1 min,  0 users,  load average: 1.34, 0.52, 0.19
	Linux pause-594992 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [00a448fe5de6cb26361e360c5526cc48cb624e6557d23a1b481db22406456fec] <==
	I1205 20:17:49.038999       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 20:17:49.039188       1 aggregator.go:171] initial CRD sync complete...
	I1205 20:17:49.043602       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 20:17:49.043709       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 20:17:49.095375       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1205 20:17:49.095668       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 20:17:49.095714       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 20:17:49.097606       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1205 20:17:49.097838       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1205 20:17:49.097870       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 20:17:49.098334       1 shared_informer.go:320] Caches are synced for configmaps
	I1205 20:17:49.100165       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1205 20:17:49.100834       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1205 20:17:49.123021       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 20:17:49.123208       1 policy_source.go:224] refreshing policies
	I1205 20:17:49.132969       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 20:17:49.143843       1 cache.go:39] Caches are synced for autoregister controller
	I1205 20:17:50.000503       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 20:17:50.952491       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 20:17:50.967514       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 20:17:51.019565       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 20:17:51.077240       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 20:17:51.091816       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 20:17:52.717854       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 20:17:52.767875       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
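The burst of "quota admission added evaluator for: ..." lines appears to be the restarted API server registering quota evaluators as built-in resource types are touched again, and the earlier "Caches are synced" lines show its informers catching up; nothing in this log indicates a failure. A sketch for confirming readiness of this instance (assuming the kubeconfig context matches the profile name):

	# hypothetical readiness probe; /readyz?verbose is a standard kube-apiserver endpoint
	kubectl --context pause-594992 get --raw '/readyz?verbose'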
	
	
	==> kube-apiserver [8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916] <==
	W1205 20:17:43.081011       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.081324       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.086876       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.090331       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.101761       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.155346       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.162952       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.168418       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.201385       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.222197       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.312513       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.369567       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.397445       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.451504       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.479025       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.500633       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.550200       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.570905       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.584307       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.601323       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.659291       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.682436       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.688242       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.712844       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.783413       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
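Everything this exited API server logged is gRPC dial failures to 127.0.0.1:2379 at around 20:17:43, which lines up with the etcd [520bd...] instance above having shut down at 20:17:33 and its replacement not serving clients until roughly 20:17:47 (per the etcd [9ff73...] log below); the connection refusals therefore look like an artifact of the restart ordering rather than an independent fault. A sketch for checking etcd health directly from the node (assuming etcdctl is available in the guest, which it may not be, and reusing the certificate paths shown in the etcd startup log):

	# hypothetical etcd health check; etcdctl availability and cert choice are assumptions
	minikube ssh -p pause-594992 -- sudo ETCDCTL_API=3 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health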
	
	
	==> kube-controller-manager [b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb] <==
	I1205 20:17:31.292795       1 serving.go:386] Generated self-signed cert in-memory
	I1205 20:17:31.768964       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1205 20:17:31.769020       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:17:31.779987       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1205 20:17:31.780284       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 20:17:31.780342       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 20:17:31.780490       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [d59948e40715f67533ef4cf36b4f0b69a232f601aafabaeb14485b8e28c2e41a] <==
	I1205 20:17:52.421432       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1205 20:17:52.421462       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1205 20:17:52.421597       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-594992"
	I1205 20:17:52.431209       1 shared_informer.go:320] Caches are synced for service account
	I1205 20:17:52.450549       1 shared_informer.go:320] Caches are synced for daemon sets
	I1205 20:17:52.453856       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1205 20:17:52.457641       1 shared_informer.go:320] Caches are synced for PVC protection
	I1205 20:17:52.462400       1 shared_informer.go:320] Caches are synced for stateful set
	I1205 20:17:52.465255       1 shared_informer.go:320] Caches are synced for ephemeral
	I1205 20:17:52.474194       1 shared_informer.go:320] Caches are synced for attach detach
	I1205 20:17:52.477262       1 shared_informer.go:320] Caches are synced for expand
	I1205 20:17:52.485036       1 shared_informer.go:320] Caches are synced for crt configmap
	I1205 20:17:52.496431       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1205 20:17:52.514268       1 shared_informer.go:320] Caches are synced for persistent volume
	I1205 20:17:52.631197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="236.812057ms"
	I1205 20:17:52.631366       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.094µs"
	I1205 20:17:52.633162       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 20:17:52.638112       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 20:17:52.665442       1 shared_informer.go:320] Caches are synced for taint
	I1205 20:17:52.665673       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 20:17:52.665799       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-594992"
	I1205 20:17:52.665850       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 20:17:53.047589       1 shared_informer.go:320] Caches are synced for garbage collector
	I1205 20:17:53.047613       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 20:17:53.057618       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b] <==
	I1205 20:17:31.245123       1 server_linux.go:66] "Using iptables proxy"
	E1205 20:17:31.360229       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:17:31.613242       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:17:33.161621       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.246"]
	E1205 20:17:33.173249       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:17:33.329727       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:17:33.329972       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:17:33.330031       1 server_linux.go:169] "Using iptables Proxier"
	
	
	==> kube-proxy [4f0ab945fce0cfc411e685e34d69b7861c565af3b31ef78157c39cb1a4526b3a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:17:50.150397       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:17:50.163991       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.246"]
	E1205 20:17:50.165547       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:17:50.229384       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:17:50.229420       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:17:50.229443       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:17:50.232414       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:17:50.232724       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:17:50.232985       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:17:50.234292       1 config.go:199] "Starting service config controller"
	I1205 20:17:50.234348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:17:50.234388       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:17:50.234404       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:17:50.234873       1 config.go:328] "Starting node config controller"
	I1205 20:17:50.234911       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:17:50.334657       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:17:50.334770       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:17:50.335209       1 shared_informer.go:320] Caches are synced for node config
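Both kube-proxy attempts emit the same pair of "Error cleaning up nftables rules ... Operation not supported" messages before logging "Using iptables Proxier"; this reads as kube-proxy's startup cleanup of any stale nftables state failing on a guest kernel without nftables support, after which it proceeds in iptables mode, so the errors look benign here. A quick way to confirm the kernel limitation from inside the VM, as a sketch (assuming the nft binary is present in the guest image, which the errors do not guarantee):

	# hypothetical check; 'Operation not supported' here would match the kube-proxy errors
	minikube ssh -p pause-594992 -- sudo nft list tables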
	
	
	==> kube-scheduler [5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e] <==
	I1205 20:17:31.641825       1 serving.go:386] Generated self-signed cert in-memory
	I1205 20:17:33.204041       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 20:17:33.208262       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1205 20:17:33.214599       1 secure_serving.go:111] Initial population of client CA failed: Get "https://192.168.50.246:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": context canceled
	I1205 20:17:33.215056       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E1205 20:17:33.220336       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E1205 20:17:33.220480       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f404bc68434b2921a7d68bd1fa4bc5aa9a8bd64bfaa3476c53afb80d203fe4c8] <==
	I1205 20:17:47.160721       1 serving.go:386] Generated self-signed cert in-memory
	W1205 20:17:49.012742       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 20:17:49.012782       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:17:49.012791       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 20:17:49.012797       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 20:17:49.072963       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 20:17:49.073009       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:17:49.075333       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 20:17:49.075441       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:17:49.075459       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:17:49.075471       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 20:17:49.175587       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
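
	The "Unable to get configmap/extension-apiserver-authentication" warning near the top of this scheduler block includes a suggested remedy. In this run it is transient (the scheduler proceeds and its client-ca cache syncs a few lines later), but if the forbidden error persisted, the hinted rolebinding could be created by hand. The binding name below is illustrative, and for the scheduler the subject is the user system:kube-scheduler named in the error rather than a service account:

	  kubectl --context pause-594992 -n kube-system create rolebinding scheduler-auth-reader \
	    --role=extension-apiserver-authentication-reader --user=system:kube-scheduler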
	
	
	==> kubelet <==
	Dec 05 20:17:46 pause-594992 kubelet[3070]: I1205 20:17:46.196118    3070 scope.go:117] "RemoveContainer" containerID="520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: I1205 20:17:46.197457    3070 scope.go:117] "RemoveContainer" containerID="8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: I1205 20:17:46.197850    3070 scope.go:117] "RemoveContainer" containerID="b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: I1205 20:17:46.205482    3070 scope.go:117] "RemoveContainer" containerID="5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: I1205 20:17:46.418572    3070 kubelet_node_status.go:72] "Attempting to register node" node="pause-594992"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: E1205 20:17:46.420732    3070 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.246:8443: connect: connection refused" node="pause-594992"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: W1205 20:17:46.453423    3070 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.50.246:8443: connect: connection refused
	Dec 05 20:17:46 pause-594992 kubelet[3070]: E1205 20:17:46.453511    3070 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.50.246:8443: connect: connection refused" logger="UnhandledError"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: W1205 20:17:46.540237    3070 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.246:8443: connect: connection refused
	Dec 05 20:17:46 pause-594992 kubelet[3070]: E1205 20:17:46.540332    3070 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.246:8443: connect: connection refused" logger="UnhandledError"
	Dec 05 20:17:47 pause-594992 kubelet[3070]: I1205 20:17:47.222815    3070 kubelet_node_status.go:72] "Attempting to register node" node="pause-594992"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.162004    3070 kubelet_node_status.go:111] "Node was previously registered" node="pause-594992"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.162136    3070 kubelet_node_status.go:75] "Successfully registered node" node="pause-594992"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.162167    3070 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.163348    3070 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.534643    3070 apiserver.go:52] "Watching apiserver"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.574266    3070 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.655149    3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45d94ddc-c393-4083-807d-febc10b83bd5-lib-modules\") pod \"kube-proxy-jxr6b\" (UID: \"45d94ddc-c393-4083-807d-febc10b83bd5\") " pod="kube-system/kube-proxy-jxr6b"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.655250    3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45d94ddc-c393-4083-807d-febc10b83bd5-xtables-lock\") pod \"kube-proxy-jxr6b\" (UID: \"45d94ddc-c393-4083-807d-febc10b83bd5\") " pod="kube-system/kube-proxy-jxr6b"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.840634    3070 scope.go:117] "RemoveContainer" containerID="495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.843787    3070 scope.go:117] "RemoveContainer" containerID="03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b"
	Dec 05 20:17:55 pause-594992 kubelet[3070]: E1205 20:17:55.726927    3070 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429875726658868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:17:55 pause-594992 kubelet[3070]: E1205 20:17:55.726979    3070 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429875726658868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:18:05 pause-594992 kubelet[3070]: E1205 20:18:05.732721    3070 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429885732312972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:18:05 pause-594992 kubelet[3070]: E1205 20:18:05.732754    3070 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429885732312972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
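The repeating eviction_manager errors at the end of the kubelet log above say the ImageFsInfoResponse from cri-o contains no ContainerFilesystems entries, so the kubelet cannot decide whether images live on a dedicated filesystem. To see exactly what the runtime reports, the same stats can be queried directly on the node; a sketch, assuming the pause-594992 VM is still reachable:

  out/minikube-linux-amd64 -p pause-594992 ssh "sudo crictl imagefsinfo"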
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-594992 -n pause-594992
helpers_test.go:261: (dbg) Run:  kubectl --context pause-594992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-594992 -n pause-594992
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-594992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-594992 logs -n 25: (1.479572028s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC | 05 Dec 24 20:13 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:13 UTC | 05 Dec 24 20:13 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-898791       | scheduled-stop-898791     | jenkins | v1.34.0 | 05 Dec 24 20:14 UTC | 05 Dec 24 20:14 UTC |
	| start   | -p kubernetes-upgrade-886958   | kubernetes-upgrade-886958 | jenkins | v1.34.0 | 05 Dec 24 20:14 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-974924         | offline-crio-974924       | jenkins | v1.34.0 | 05 Dec 24 20:14 UTC | 05 Dec 24 20:16 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-617890      | minikube                  | jenkins | v1.26.0 | 05 Dec 24 20:14 UTC | 05 Dec 24 20:16 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-899594      | minikube                  | jenkins | v1.26.0 | 05 Dec 24 20:14 UTC | 05 Dec 24 20:15 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-899594 stop    | minikube                  | jenkins | v1.26.0 | 05 Dec 24 20:15 UTC | 05 Dec 24 20:16 UTC |
	| start   | -p stopped-upgrade-899594      | stopped-upgrade-899594    | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC | 05 Dec 24 20:16 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-974924         | offline-crio-974924       | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC | 05 Dec 24 20:16 UTC |
	| start   | -p pause-594992 --memory=2048  | pause-594992              | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC | 05 Dec 24 20:17 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-617890      | running-upgrade-617890    | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC | 05 Dec 24 20:18 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-899594      | stopped-upgrade-899594    | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC | 05 Dec 24 20:16 UTC |
	| start   | -p NoKubernetes-739327         | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-739327         | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:16 UTC | 05 Dec 24 20:17 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-594992                | pause-594992              | jenkins | v1.34.0 | 05 Dec 24 20:17 UTC | 05 Dec 24 20:18 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-739327         | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:17 UTC | 05 Dec 24 20:17 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-739327         | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:17 UTC | 05 Dec 24 20:17 UTC |
	| start   | -p NoKubernetes-739327         | NoKubernetes-739327       | jenkins | v1.34.0 | 05 Dec 24 20:17 UTC |                     |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
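
	The Audit table above is minikube's per-installation command history as rendered by minikube logs. If the on-disk layout matches current minikube releases, the raw entries should also be readable from the logs directory under MINIKUBE_HOME (the path below is an assumption, built from the MINIKUBE_HOME value printed in the Last Start section further down):

	  cat /home/jenkins/minikube-integration/20052-530897/.minikube/logs/audit.json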
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:17:43
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:17:43.266397  576500 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:17:43.266503  576500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:17:43.266507  576500 out.go:358] Setting ErrFile to fd 2...
	I1205 20:17:43.266510  576500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:17:43.266706  576500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:17:43.267275  576500 out.go:352] Setting JSON to false
	I1205 20:17:43.268382  576500 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10809,"bootTime":1733419054,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:17:43.268477  576500 start.go:139] virtualization: kvm guest
	I1205 20:17:43.270981  576500 out.go:177] * [NoKubernetes-739327] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:17:43.272498  576500 notify.go:220] Checking for updates...
	I1205 20:17:43.272502  576500 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:17:43.274104  576500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:17:43.275581  576500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:17:43.277074  576500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:17:43.278674  576500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:17:43.280138  576500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:17:43.282205  576500 config.go:182] Loaded profile config "kubernetes-upgrade-886958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:17:43.282408  576500 config.go:182] Loaded profile config "pause-594992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:17:43.282527  576500 config.go:182] Loaded profile config "running-upgrade-617890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1205 20:17:43.282555  576500 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 20:17:43.282682  576500 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:17:43.320943  576500 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:17:43.322392  576500 start.go:297] selected driver: kvm2
	I1205 20:17:43.322403  576500 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:17:43.322420  576500 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:17:43.322818  576500 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 20:17:43.322922  576500 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:17:43.323017  576500 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:17:43.339246  576500 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:17:43.339289  576500 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:17:43.339809  576500 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1205 20:17:43.339960  576500 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 20:17:43.339982  576500 cni.go:84] Creating CNI manager for ""
	I1205 20:17:43.340027  576500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:17:43.340031  576500 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:17:43.340042  576500 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1205 20:17:43.340114  576500 start.go:340] cluster config:
	{Name:NoKubernetes-739327 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-739327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:17:43.340256  576500 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:17:43.342091  576500 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-739327
	I1205 20:17:43.343298  576500 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W1205 20:17:43.453219  576500 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 20:17:43.453421  576500 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/NoKubernetes-739327/config.json ...
	I1205 20:17:43.453454  576500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/NoKubernetes-739327/config.json: {Name:mk3972a45e368dbc345926c535f87626ea849c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:17:43.453597  576500 start.go:360] acquireMachinesLock for NoKubernetes-739327: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:17:43.453626  576500 start.go:364] duration metric: took 19.985µs to acquireMachinesLock for "NoKubernetes-739327"
	I1205 20:17:43.453636  576500 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-739327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-739327 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:17:43.453694  576500 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:17:43.920387  576117 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163 5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e 03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b 520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb 8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916 1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3 a0585ef4ee5ad21ecbfa844d67bbca5d1fecf69dad43cfa7ac6126bdf42997a0 2886efe6ebde53691a3e99cfe076bbafeb217dc2edeaa371f7099189d74a5fa6 5fc0b5765d3e201741369457b198bb9ec5a61a5675008e978e435957501f01f8 05b4a3bd5214c727f059ec8c2342426f28fd49d9a43bdb17d7fdaa7477b4a723 1e5850bc705289c2026062e6bb62731933aade93243ad68fa62e12b574758614: (12.843041525s)
	W1205 20:17:43.920501  576117 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163 5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e 03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b 520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb 8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916 1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3 a0585ef4ee5ad21ecbfa844d67bbca5d1fecf69dad43cfa7ac6126bdf42997a0 2886efe6ebde53691a3e99cfe076bbafeb217dc2edeaa371f7099189d74a5fa6 5fc0b5765d3e201741369457b198bb9ec5a61a5675008e978e435957501f01f8 05b4a3bd5214c727f059ec8c2342426f28fd49d9a43bdb17d7fdaa7477b4a723 1e5850bc705289c2026062e6bb62731933aade93243ad68fa62e12b574758614: Process exited with status 1
	stdout:
	495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163
	5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e
	03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b
	520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f
	b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb
	8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916
	
	stderr:
	E1205 20:17:43.901177    2814 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3\": container with ID starting with 1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3 not found: ID does not exist" containerID="1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3"
	time="2024-12-05T20:17:43Z" level=fatal msg="stopping the container \"1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3\": rpc error: code = NotFound desc = could not find container \"1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3\": container with ID starting with 1b7784163a2d0c3ff601cfa74dddb5bc0dff81deb3f01c7fc1f26feba42387d3 not found: ID does not exist"
	I1205 20:17:43.920579  576117 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:17:43.968858  576117 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:17:43.980303  576117 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Dec  5 20:16 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Dec  5 20:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Dec  5 20:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Dec  5 20:16 /etc/kubernetes/scheduler.conf
	
	I1205 20:17:43.980380  576117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:17:43.990046  576117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:17:44.000032  576117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:17:44.012740  576117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:17:44.012802  576117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:17:44.026885  576117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:17:44.037280  576117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:17:44.037353  576117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:17:44.047380  576117 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:17:44.058495  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:44.115209  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:43.456024  576500 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I1205 20:17:43.456261  576500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:17:43.456334  576500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:17:43.472647  576500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I1205 20:17:43.473207  576500 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:17:43.473885  576500 main.go:141] libmachine: Using API Version  1
	I1205 20:17:43.473908  576500 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:17:43.474359  576500 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:17:43.474687  576500 main.go:141] libmachine: (NoKubernetes-739327) Calling .GetMachineName
	I1205 20:17:43.474882  576500 main.go:141] libmachine: (NoKubernetes-739327) Calling .DriverName
	I1205 20:17:43.475125  576500 start.go:159] libmachine.API.Create for "NoKubernetes-739327" (driver="kvm2")
	I1205 20:17:43.475165  576500 client.go:168] LocalClient.Create starting
	I1205 20:17:43.475198  576500 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 20:17:43.475235  576500 main.go:141] libmachine: Decoding PEM data...
	I1205 20:17:43.475252  576500 main.go:141] libmachine: Parsing certificate...
	I1205 20:17:43.475319  576500 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 20:17:43.475351  576500 main.go:141] libmachine: Decoding PEM data...
	I1205 20:17:43.475365  576500 main.go:141] libmachine: Parsing certificate...
	I1205 20:17:43.475388  576500 main.go:141] libmachine: Running pre-create checks...
	I1205 20:17:43.475396  576500 main.go:141] libmachine: (NoKubernetes-739327) Calling .PreCreateCheck
	I1205 20:17:43.475834  576500 main.go:141] libmachine: (NoKubernetes-739327) Calling .GetConfigRaw
	I1205 20:17:43.476321  576500 main.go:141] libmachine: Creating machine...
	I1205 20:17:43.476330  576500 main.go:141] libmachine: (NoKubernetes-739327) Calling .Create
	I1205 20:17:43.476513  576500 main.go:141] libmachine: (NoKubernetes-739327) Creating KVM machine...
	I1205 20:17:43.478008  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | found existing default KVM network
	I1205 20:17:43.480166  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.479953  576527 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:aa:7e} reservation:<nil>}
	I1205 20:17:43.481435  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.481320  576527 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:10:91:27} reservation:<nil>}
	I1205 20:17:43.484172  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.484045  576527 network.go:209] skipping subnet 192.168.61.0/24 that is reserved: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1205 20:17:43.485166  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.485051  576527 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:66:e0:a4} reservation:<nil>}
	I1205 20:17:43.487537  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.486709  576527 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001193f0}
	I1205 20:17:43.487555  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | created network xml: 
	I1205 20:17:43.487565  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | <network>
	I1205 20:17:43.487572  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   <name>mk-NoKubernetes-739327</name>
	I1205 20:17:43.487579  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   <dns enable='no'/>
	I1205 20:17:43.487585  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   
	I1205 20:17:43.487601  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I1205 20:17:43.487608  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |     <dhcp>
	I1205 20:17:43.487616  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I1205 20:17:43.487628  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |     </dhcp>
	I1205 20:17:43.487635  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   </ip>
	I1205 20:17:43.487640  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG |   
	I1205 20:17:43.487647  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | </network>
	I1205 20:17:43.487652  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | 
	I1205 20:17:43.493671  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | trying to create private KVM network mk-NoKubernetes-739327 192.168.83.0/24...
	I1205 20:17:43.579342  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | private KVM network mk-NoKubernetes-739327 192.168.83.0/24 created
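
	At this point libmachine has created the private libvirt network from the XML it printed above. If virsh is available on the Jenkins host, the result can be checked independently of minikube; a sketch, assuming libvirt privileges on that host:

	  sudo virsh net-list --all
	  sudo virsh net-dumpxml mk-NoKubernetes-739327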
	I1205 20:17:43.579409  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.579300  576527 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:17:43.579451  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327 ...
	I1205 20:17:43.579471  576500 main.go:141] libmachine: (NoKubernetes-739327) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:17:43.579494  576500 main.go:141] libmachine: (NoKubernetes-739327) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:17:43.892195  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:43.892030  576527 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327/id_rsa...
	I1205 20:17:44.001971  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:44.001821  576527 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327/NoKubernetes-739327.rawdisk...
	I1205 20:17:44.001997  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Writing magic tar header
	I1205 20:17:44.002016  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Writing SSH key tar header
	I1205 20:17:44.002032  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:44.001985  576527 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327 ...
	I1205 20:17:44.002189  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327
	I1205 20:17:44.002215  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 20:17:44.002228  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327 (perms=drwx------)
	I1205 20:17:44.002238  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:17:44.002249  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 20:17:44.002256  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:17:44.002267  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:17:44.002273  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Checking permissions on dir: /home
	I1205 20:17:44.002282  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | Skipping /home - not owner
	I1205 20:17:44.002294  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:17:44.002303  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 20:17:44.002326  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 20:17:44.002339  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:17:44.002349  576500 main.go:141] libmachine: (NoKubernetes-739327) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:17:44.002354  576500 main.go:141] libmachine: (NoKubernetes-739327) Creating domain...
	I1205 20:17:44.004060  576500 main.go:141] libmachine: (NoKubernetes-739327) define libvirt domain using xml: 
	I1205 20:17:44.004073  576500 main.go:141] libmachine: (NoKubernetes-739327) <domain type='kvm'>
	I1205 20:17:44.004085  576500 main.go:141] libmachine: (NoKubernetes-739327)   <name>NoKubernetes-739327</name>
	I1205 20:17:44.004092  576500 main.go:141] libmachine: (NoKubernetes-739327)   <memory unit='MiB'>6000</memory>
	I1205 20:17:44.004100  576500 main.go:141] libmachine: (NoKubernetes-739327)   <vcpu>2</vcpu>
	I1205 20:17:44.004109  576500 main.go:141] libmachine: (NoKubernetes-739327)   <features>
	I1205 20:17:44.004115  576500 main.go:141] libmachine: (NoKubernetes-739327)     <acpi/>
	I1205 20:17:44.004119  576500 main.go:141] libmachine: (NoKubernetes-739327)     <apic/>
	I1205 20:17:44.004133  576500 main.go:141] libmachine: (NoKubernetes-739327)     <pae/>
	I1205 20:17:44.004137  576500 main.go:141] libmachine: (NoKubernetes-739327)     
	I1205 20:17:44.004143  576500 main.go:141] libmachine: (NoKubernetes-739327)   </features>
	I1205 20:17:44.004147  576500 main.go:141] libmachine: (NoKubernetes-739327)   <cpu mode='host-passthrough'>
	I1205 20:17:44.004152  576500 main.go:141] libmachine: (NoKubernetes-739327)   
	I1205 20:17:44.004156  576500 main.go:141] libmachine: (NoKubernetes-739327)   </cpu>
	I1205 20:17:44.004162  576500 main.go:141] libmachine: (NoKubernetes-739327)   <os>
	I1205 20:17:44.004173  576500 main.go:141] libmachine: (NoKubernetes-739327)     <type>hvm</type>
	I1205 20:17:44.004179  576500 main.go:141] libmachine: (NoKubernetes-739327)     <boot dev='cdrom'/>
	I1205 20:17:44.004188  576500 main.go:141] libmachine: (NoKubernetes-739327)     <boot dev='hd'/>
	I1205 20:17:44.004198  576500 main.go:141] libmachine: (NoKubernetes-739327)     <bootmenu enable='no'/>
	I1205 20:17:44.004203  576500 main.go:141] libmachine: (NoKubernetes-739327)   </os>
	I1205 20:17:44.004208  576500 main.go:141] libmachine: (NoKubernetes-739327)   <devices>
	I1205 20:17:44.004214  576500 main.go:141] libmachine: (NoKubernetes-739327)     <disk type='file' device='cdrom'>
	I1205 20:17:44.004225  576500 main.go:141] libmachine: (NoKubernetes-739327)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327/boot2docker.iso'/>
	I1205 20:17:44.004231  576500 main.go:141] libmachine: (NoKubernetes-739327)       <target dev='hdc' bus='scsi'/>
	I1205 20:17:44.004237  576500 main.go:141] libmachine: (NoKubernetes-739327)       <readonly/>
	I1205 20:17:44.004242  576500 main.go:141] libmachine: (NoKubernetes-739327)     </disk>
	I1205 20:17:44.004250  576500 main.go:141] libmachine: (NoKubernetes-739327)     <disk type='file' device='disk'>
	I1205 20:17:44.004257  576500 main.go:141] libmachine: (NoKubernetes-739327)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:17:44.004332  576500 main.go:141] libmachine: (NoKubernetes-739327)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/NoKubernetes-739327/NoKubernetes-739327.rawdisk'/>
	I1205 20:17:44.004352  576500 main.go:141] libmachine: (NoKubernetes-739327)       <target dev='hda' bus='virtio'/>
	I1205 20:17:44.004361  576500 main.go:141] libmachine: (NoKubernetes-739327)     </disk>
	I1205 20:17:44.004368  576500 main.go:141] libmachine: (NoKubernetes-739327)     <interface type='network'>
	I1205 20:17:44.004386  576500 main.go:141] libmachine: (NoKubernetes-739327)       <source network='mk-NoKubernetes-739327'/>
	I1205 20:17:44.004392  576500 main.go:141] libmachine: (NoKubernetes-739327)       <model type='virtio'/>
	I1205 20:17:44.004400  576500 main.go:141] libmachine: (NoKubernetes-739327)     </interface>
	I1205 20:17:44.004405  576500 main.go:141] libmachine: (NoKubernetes-739327)     <interface type='network'>
	I1205 20:17:44.004413  576500 main.go:141] libmachine: (NoKubernetes-739327)       <source network='default'/>
	I1205 20:17:44.004419  576500 main.go:141] libmachine: (NoKubernetes-739327)       <model type='virtio'/>
	I1205 20:17:44.004426  576500 main.go:141] libmachine: (NoKubernetes-739327)     </interface>
	I1205 20:17:44.004432  576500 main.go:141] libmachine: (NoKubernetes-739327)     <serial type='pty'>
	I1205 20:17:44.004440  576500 main.go:141] libmachine: (NoKubernetes-739327)       <target port='0'/>
	I1205 20:17:44.004446  576500 main.go:141] libmachine: (NoKubernetes-739327)     </serial>
	I1205 20:17:44.004454  576500 main.go:141] libmachine: (NoKubernetes-739327)     <console type='pty'>
	I1205 20:17:44.004461  576500 main.go:141] libmachine: (NoKubernetes-739327)       <target type='serial' port='0'/>
	I1205 20:17:44.004468  576500 main.go:141] libmachine: (NoKubernetes-739327)     </console>
	I1205 20:17:44.004474  576500 main.go:141] libmachine: (NoKubernetes-739327)     <rng model='virtio'>
	I1205 20:17:44.004482  576500 main.go:141] libmachine: (NoKubernetes-739327)       <backend model='random'>/dev/random</backend>
	I1205 20:17:44.004487  576500 main.go:141] libmachine: (NoKubernetes-739327)     </rng>
	I1205 20:17:44.004493  576500 main.go:141] libmachine: (NoKubernetes-739327)     
	I1205 20:17:44.004498  576500 main.go:141] libmachine: (NoKubernetes-739327)     
	I1205 20:17:44.004505  576500 main.go:141] libmachine: (NoKubernetes-739327)   </devices>
	I1205 20:17:44.004511  576500 main.go:141] libmachine: (NoKubernetes-739327) </domain>
	I1205 20:17:44.004523  576500 main.go:141] libmachine: (NoKubernetes-739327) 
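
	The domain XML above is what libmachine hands to libvirt to define the NoKubernetes-739327 VM (6000 MiB, 2 vCPUs, boot ISO plus raw disk, two virtio NICs). Once the domain is defined, the stored definition and its state can be inspected with virsh, assuming the same host-level access as above:

	  sudo virsh dumpxml NoKubernetes-739327
	  sudo virsh domstate NoKubernetes-739327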
	I1205 20:17:44.009263  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:8d:63:cd in network default
	I1205 20:17:44.010014  576500 main.go:141] libmachine: (NoKubernetes-739327) Ensuring networks are active...
	I1205 20:17:44.010037  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:44.010956  576500 main.go:141] libmachine: (NoKubernetes-739327) Ensuring network default is active
	I1205 20:17:44.011247  576500 main.go:141] libmachine: (NoKubernetes-739327) Ensuring network mk-NoKubernetes-739327 is active
	I1205 20:17:44.011978  576500 main.go:141] libmachine: (NoKubernetes-739327) Getting domain xml...
	I1205 20:17:44.012966  576500 main.go:141] libmachine: (NoKubernetes-739327) Creating domain...
	I1205 20:17:45.318034  576500 main.go:141] libmachine: (NoKubernetes-739327) Waiting to get IP...
	I1205 20:17:45.318936  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:45.319391  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:45.319434  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:45.319370  576527 retry.go:31] will retry after 267.529682ms: waiting for machine to come up
	I1205 20:17:45.589154  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:45.589730  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:45.589757  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:45.589645  576527 retry.go:31] will retry after 239.95428ms: waiting for machine to come up
	I1205 20:17:45.831212  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:45.831736  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:45.831758  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:45.831658  576527 retry.go:31] will retry after 315.686144ms: waiting for machine to come up
	I1205 20:17:46.149152  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:46.149628  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:46.149650  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:46.149584  576527 retry.go:31] will retry after 504.61278ms: waiting for machine to come up
	I1205 20:17:46.656468  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:46.657044  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:46.657064  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:46.656993  576527 retry.go:31] will retry after 576.866276ms: waiting for machine to come up
	I1205 20:17:47.235804  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:47.236300  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:47.236321  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:47.236291  576527 retry.go:31] will retry after 758.40512ms: waiting for machine to come up
	I1205 20:17:47.996023  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:47.996626  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:47.996647  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:47.996578  576527 retry.go:31] will retry after 902.687934ms: waiting for machine to come up
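
	The retry loop above is libmachine polling for a DHCP lease on the new VM's NIC (MAC 52:54:00:f4:c9:2d) in network mk-NoKubernetes-739327. The same information can be read from libvirt directly, which helps when a machine never "gets IP"; again this assumes virsh access on the host:

	  sudo virsh net-dhcp-leases mk-NoKubernetes-739327
	  sudo virsh domifaddr NoKubernetes-739327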
	I1205 20:17:45.195989  576117 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.080732641s)
	I1205 20:17:45.196034  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:45.450094  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:45.532646  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:45.717580  576117 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:17:45.717696  576117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:17:46.217999  576117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:17:46.718772  576117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:17:46.748637  576117 api_server.go:72] duration metric: took 1.031055079s to wait for apiserver process to appear ...
	I1205 20:17:46.748673  576117 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:17:46.748701  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:49.039383  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:17:49.039418  576117 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:17:49.039436  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:49.047480  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:17:49.047516  576117 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:17:49.248791  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:49.254267  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:17:49.254293  576117 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:17:49.749458  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:49.754410  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:17:49.754438  576117 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:17:50.249505  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:50.259633  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:17:50.259677  576117 api_server.go:103] status: https://192.168.50.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:17:50.749177  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:17:50.763392  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I1205 20:17:50.771564  576117 api_server.go:141] control plane version: v1.31.2
	I1205 20:17:50.771606  576117 api_server.go:131] duration metric: took 4.022924466s to wait for apiserver health ...
	I1205 20:17:50.771617  576117 cni.go:84] Creating CNI manager for ""
	I1205 20:17:50.771626  576117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:17:50.773378  576117 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:17:50.928409  575390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	W1205 20:17:51.009918  575390 kubeadm.go:714] addon install failed, will retry: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns": dial tcp 192.168.72.205:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I1205 20:17:51.009971  575390 kubeadm.go:597] duration metric: took 38.119172235s to restartPrimaryControlPlane
	W1205 20:17:51.010074  575390 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:17:51.010112  575390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:17:48.901201  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:48.901731  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:48.901748  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:48.901689  576527 retry.go:31] will retry after 1.229707548s: waiting for machine to come up
	I1205 20:17:50.133285  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:50.133845  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:50.133870  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:50.133778  576527 retry.go:31] will retry after 1.36134392s: waiting for machine to come up
	I1205 20:17:51.497233  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:51.497707  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:51.497738  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:51.497650  576527 retry.go:31] will retry after 1.794206833s: waiting for machine to come up
	I1205 20:17:53.794876  575390 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.784733298s)
	I1205 20:17:53.794975  575390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:17:53.812657  575390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:17:53.824857  575390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:17:53.836017  575390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:17:53.836054  575390 kubeadm.go:157] found existing configuration files:
	
	I1205 20:17:53.836120  575390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1205 20:17:53.844175  575390 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:17:53.844288  575390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:17:53.855084  575390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1205 20:17:53.865499  575390 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:17:53.865581  575390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:17:53.876578  575390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1205 20:17:53.885252  575390 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:17:53.885348  575390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:17:53.896385  575390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1205 20:17:53.907071  575390 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:17:53.907148  575390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:17:53.918097  575390 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:17:53.959515  575390 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1205 20:17:53.959622  575390 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:17:54.085094  575390 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:17:54.085254  575390 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:17:54.085440  575390 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:17:54.239720  575390 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:17:50.774777  576117 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:17:50.792037  576117 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:17:50.823192  576117 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:17:50.823331  576117 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 20:17:50.823356  576117 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 20:17:50.838187  576117 system_pods.go:59] 6 kube-system pods found
	I1205 20:17:50.838243  576117 system_pods.go:61] "coredns-7c65d6cfc9-x529d" [0c29f67b-db11-4444-a0ed-18a831e6a5fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:17:50.838257  576117 system_pods.go:61] "etcd-pause-594992" [bcc74ca9-37f0-4ab7-a3b9-a53b2d524754] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:17:50.838268  576117 system_pods.go:61] "kube-apiserver-pause-594992" [96a47b19-a5be-4ab2-89c9-296af79014cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:17:50.838282  576117 system_pods.go:61] "kube-controller-manager-pause-594992" [9f906a5e-85d3-4cf3-aeb0-8ad317ba6589] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:17:50.838297  576117 system_pods.go:61] "kube-proxy-jxr6b" [45d94ddc-c393-4083-807d-febc10b83bd5] Running
	I1205 20:17:50.838310  576117 system_pods.go:61] "kube-scheduler-pause-594992" [a9cea55a-87f2-4f79-96d3-318229726ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:17:50.838328  576117 system_pods.go:74] duration metric: took 15.105188ms to wait for pod list to return data ...
	I1205 20:17:50.838345  576117 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:17:50.843124  576117 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:17:50.843157  576117 node_conditions.go:123] node cpu capacity is 2
	I1205 20:17:50.843170  576117 node_conditions.go:105] duration metric: took 4.812608ms to run NodePressure ...
	I1205 20:17:50.843197  576117 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:17:51.121331  576117 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:17:51.127024  576117 kubeadm.go:739] kubelet initialised
	I1205 20:17:51.127065  576117 kubeadm.go:740] duration metric: took 5.696124ms waiting for restarted kubelet to initialise ...
	I1205 20:17:51.127077  576117 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:17:51.133535  576117 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:51.142930  576117 pod_ready.go:93] pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace has status "Ready":"True"
	I1205 20:17:51.142953  576117 pod_ready.go:82] duration metric: took 9.393735ms for pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:51.142965  576117 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:53.151157  576117 pod_ready.go:103] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"False"
	I1205 20:17:54.242674  575390 out.go:235]   - Generating certificates and keys ...
	I1205 20:17:54.242778  575390 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:17:54.242908  575390 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:17:54.243048  575390 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:17:54.243141  575390 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:17:54.243258  575390 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:17:54.243337  575390 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:17:54.243429  575390 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:17:54.243534  575390 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:17:54.243639  575390 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:17:54.243756  575390 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:17:54.243829  575390 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:17:54.243920  575390 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:17:54.413660  575390 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:17:54.707434  575390 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:17:54.807580  575390 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:17:54.944901  575390 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:17:55.028951  575390 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:17:55.029906  575390 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:17:55.029975  575390 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:17:55.181662  575390 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:17:55.183260  575390 out.go:235]   - Booting up control plane ...
	I1205 20:17:55.183426  575390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:17:55.186833  575390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:17:55.187807  575390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:17:55.188641  575390 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:17:55.190430  575390 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:17:53.293376  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:53.293859  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:53.293881  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:53.293796  576527 retry.go:31] will retry after 1.905908252s: waiting for machine to come up
	I1205 20:17:55.201586  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:55.202144  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:55.202170  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:55.202096  576527 retry.go:31] will retry after 2.625842394s: waiting for machine to come up
	I1205 20:17:57.830496  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:17:57.831027  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:17:57.831056  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:17:57.830952  576527 retry.go:31] will retry after 3.283441276s: waiting for machine to come up
	I1205 20:17:55.651384  576117 pod_ready.go:103] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"False"
	I1205 20:17:58.150174  576117 pod_ready.go:103] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"False"
	I1205 20:18:00.150481  576117 pod_ready.go:103] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"False"
	I1205 20:18:00.650412  576117 pod_ready.go:93] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:00.650440  576117 pod_ready.go:82] duration metric: took 9.507466804s for pod "etcd-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.650454  576117 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.656259  576117 pod_ready.go:93] pod "kube-apiserver-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:00.656308  576117 pod_ready.go:82] duration metric: took 5.844719ms for pod "kube-apiserver-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.656323  576117 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.662498  576117 pod_ready.go:93] pod "kube-controller-manager-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:00.662521  576117 pod_ready.go:82] duration metric: took 6.191357ms for pod "kube-controller-manager-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.662531  576117 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jxr6b" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.668070  576117 pod_ready.go:93] pod "kube-proxy-jxr6b" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:00.668092  576117 pod_ready.go:82] duration metric: took 5.55508ms for pod "kube-proxy-jxr6b" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.668100  576117 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.673570  576117 pod_ready.go:93] pod "kube-scheduler-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:00.673596  576117 pod_ready.go:82] duration metric: took 5.488053ms for pod "kube-scheduler-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:00.673605  576117 pod_ready.go:39] duration metric: took 9.546516314s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:18:00.673629  576117 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:18:00.687979  576117 ops.go:34] apiserver oom_adj: -16
	I1205 20:18:00.688029  576117 kubeadm.go:597] duration metric: took 29.712615502s to restartPrimaryControlPlane
	I1205 20:18:00.688043  576117 kubeadm.go:394] duration metric: took 29.890338567s to StartCluster
	I1205 20:18:00.688069  576117 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:18:00.688169  576117 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:18:00.689119  576117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:18:00.689399  576117 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.246 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:18:00.689538  576117 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:18:00.689656  576117 config.go:182] Loaded profile config "pause-594992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:18:00.691335  576117 out.go:177] * Enabled addons: 
	I1205 20:18:00.691344  576117 out.go:177] * Verifying Kubernetes components...
	I1205 20:18:01.117806  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | domain NoKubernetes-739327 has defined MAC address 52:54:00:f4:c9:2d in network mk-NoKubernetes-739327
	I1205 20:18:01.118357  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | unable to find current IP address of domain NoKubernetes-739327 in network mk-NoKubernetes-739327
	I1205 20:18:01.118380  576500 main.go:141] libmachine: (NoKubernetes-739327) DBG | I1205 20:18:01.118260  576527 retry.go:31] will retry after 5.355367005s: waiting for machine to come up
	I1205 20:18:00.692685  576117 addons.go:510] duration metric: took 3.154985ms for enable addons: enabled=[]
	I1205 20:18:00.692781  576117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:18:00.855958  576117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:18:00.871761  576117 node_ready.go:35] waiting up to 6m0s for node "pause-594992" to be "Ready" ...
	I1205 20:18:00.875090  576117 node_ready.go:49] node "pause-594992" has status "Ready":"True"
	I1205 20:18:00.875119  576117 node_ready.go:38] duration metric: took 3.322664ms for node "pause-594992" to be "Ready" ...
	I1205 20:18:00.875132  576117 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:18:01.049693  576117 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:01.448260  576117 pod_ready.go:93] pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:01.448323  576117 pod_ready.go:82] duration metric: took 398.600753ms for pod "coredns-7c65d6cfc9-x529d" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:01.448335  576117 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:01.848344  576117 pod_ready.go:93] pod "etcd-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:01.848380  576117 pod_ready.go:82] duration metric: took 400.03721ms for pod "etcd-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:01.848395  576117 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:02.248436  576117 pod_ready.go:93] pod "kube-apiserver-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:02.248470  576117 pod_ready.go:82] duration metric: took 400.066946ms for pod "kube-apiserver-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:02.248486  576117 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:02.647998  576117 pod_ready.go:93] pod "kube-controller-manager-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:02.648040  576117 pod_ready.go:82] duration metric: took 399.543946ms for pod "kube-controller-manager-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:02.648057  576117 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jxr6b" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:03.049235  576117 pod_ready.go:93] pod "kube-proxy-jxr6b" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:03.049259  576117 pod_ready.go:82] duration metric: took 401.194648ms for pod "kube-proxy-jxr6b" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:03.049269  576117 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:03.448952  576117 pod_ready.go:93] pod "kube-scheduler-pause-594992" in "kube-system" namespace has status "Ready":"True"
	I1205 20:18:03.448977  576117 pod_ready.go:82] duration metric: took 399.701483ms for pod "kube-scheduler-pause-594992" in "kube-system" namespace to be "Ready" ...
	I1205 20:18:03.448986  576117 pod_ready.go:39] duration metric: took 2.573841516s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:18:03.449003  576117 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:18:03.449054  576117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:18:03.466422  576117 api_server.go:72] duration metric: took 2.776987485s to wait for apiserver process to appear ...
	I1205 20:18:03.466457  576117 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:18:03.466485  576117 api_server.go:253] Checking apiserver healthz at https://192.168.50.246:8443/healthz ...
	I1205 20:18:03.471057  576117 api_server.go:279] https://192.168.50.246:8443/healthz returned 200:
	ok
	I1205 20:18:03.471994  576117 api_server.go:141] control plane version: v1.31.2
	I1205 20:18:03.472016  576117 api_server.go:131] duration metric: took 5.551537ms to wait for apiserver health ...
	I1205 20:18:03.472024  576117 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:18:03.650665  576117 system_pods.go:59] 6 kube-system pods found
	I1205 20:18:03.650702  576117 system_pods.go:61] "coredns-7c65d6cfc9-x529d" [0c29f67b-db11-4444-a0ed-18a831e6a5fe] Running
	I1205 20:18:03.650710  576117 system_pods.go:61] "etcd-pause-594992" [bcc74ca9-37f0-4ab7-a3b9-a53b2d524754] Running
	I1205 20:18:03.650715  576117 system_pods.go:61] "kube-apiserver-pause-594992" [96a47b19-a5be-4ab2-89c9-296af79014cb] Running
	I1205 20:18:03.650719  576117 system_pods.go:61] "kube-controller-manager-pause-594992" [9f906a5e-85d3-4cf3-aeb0-8ad317ba6589] Running
	I1205 20:18:03.650729  576117 system_pods.go:61] "kube-proxy-jxr6b" [45d94ddc-c393-4083-807d-febc10b83bd5] Running
	I1205 20:18:03.650734  576117 system_pods.go:61] "kube-scheduler-pause-594992" [a9cea55a-87f2-4f79-96d3-318229726ded] Running
	I1205 20:18:03.650744  576117 system_pods.go:74] duration metric: took 178.711793ms to wait for pod list to return data ...
	I1205 20:18:03.650758  576117 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:18:03.847033  576117 default_sa.go:45] found service account: "default"
	I1205 20:18:03.847067  576117 default_sa.go:55] duration metric: took 196.299104ms for default service account to be created ...
	I1205 20:18:03.847079  576117 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:18:04.049711  576117 system_pods.go:86] 6 kube-system pods found
	I1205 20:18:04.049751  576117 system_pods.go:89] "coredns-7c65d6cfc9-x529d" [0c29f67b-db11-4444-a0ed-18a831e6a5fe] Running
	I1205 20:18:04.049762  576117 system_pods.go:89] "etcd-pause-594992" [bcc74ca9-37f0-4ab7-a3b9-a53b2d524754] Running
	I1205 20:18:04.049768  576117 system_pods.go:89] "kube-apiserver-pause-594992" [96a47b19-a5be-4ab2-89c9-296af79014cb] Running
	I1205 20:18:04.049775  576117 system_pods.go:89] "kube-controller-manager-pause-594992" [9f906a5e-85d3-4cf3-aeb0-8ad317ba6589] Running
	I1205 20:18:04.049780  576117 system_pods.go:89] "kube-proxy-jxr6b" [45d94ddc-c393-4083-807d-febc10b83bd5] Running
	I1205 20:18:04.049785  576117 system_pods.go:89] "kube-scheduler-pause-594992" [a9cea55a-87f2-4f79-96d3-318229726ded] Running
	I1205 20:18:04.049795  576117 system_pods.go:126] duration metric: took 202.70981ms to wait for k8s-apps to be running ...
	I1205 20:18:04.049804  576117 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:18:04.049862  576117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:18:04.069009  576117 system_svc.go:56] duration metric: took 19.190528ms WaitForService to wait for kubelet
	I1205 20:18:04.069045  576117 kubeadm.go:582] duration metric: took 3.379619872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:18:04.069103  576117 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:18:04.247603  576117 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:18:04.247632  576117 node_conditions.go:123] node cpu capacity is 2
	I1205 20:18:04.247646  576117 node_conditions.go:105] duration metric: took 178.537968ms to run NodePressure ...
	I1205 20:18:04.247660  576117 start.go:241] waiting for startup goroutines ...
	I1205 20:18:04.247669  576117 start.go:246] waiting for cluster config update ...
	I1205 20:18:04.247679  576117 start.go:255] writing updated cluster config ...
	I1205 20:18:04.248548  576117 ssh_runner.go:195] Run: rm -f paused
	I1205 20:18:04.300672  576117 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:18:04.302750  576117 out.go:177] * Done! kubectl is now configured to use "pause-594992" cluster and "default" namespace by default
	I1205 20:18:03.695066  575390 kubeadm.go:310] [apiclient] All control plane components are healthy after 8.503739 seconds
	I1205 20:18:03.695249  575390 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:18:03.711256  575390 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:18:04.237345  575390 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:18:04.237643  575390 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-617890 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:18:04.749660  575390 kubeadm.go:310] [bootstrap-token] Using token: fpw9ia.pfcbg8tp5msqk67q
	I1205 20:18:04.751194  575390 out.go:235]   - Configuring RBAC rules ...
	I1205 20:18:04.751347  575390 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:18:04.758654  575390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:18:04.767751  575390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:18:04.775934  575390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:18:04.778999  575390 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:18:04.781931  575390 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:18:04.793456  575390 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:18:05.021855  575390 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:18:05.171150  575390 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:18:05.172235  575390 kubeadm.go:310] 
	I1205 20:18:05.172355  575390 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:18:05.172373  575390 kubeadm.go:310] 
	I1205 20:18:05.172497  575390 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:18:05.172531  575390 kubeadm.go:310] 
	I1205 20:18:05.172590  575390 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:18:05.172671  575390 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:18:05.172717  575390 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:18:05.172724  575390 kubeadm.go:310] 
	I1205 20:18:05.172768  575390 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:18:05.172775  575390 kubeadm.go:310] 
	I1205 20:18:05.172819  575390 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:18:05.172829  575390 kubeadm.go:310] 
	I1205 20:18:05.172919  575390 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:18:05.173023  575390 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:18:05.173152  575390 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:18:05.173165  575390 kubeadm.go:310] 
	I1205 20:18:05.173318  575390 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:18:05.173438  575390 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:18:05.173448  575390 kubeadm.go:310] 
	I1205 20:18:05.173573  575390 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fpw9ia.pfcbg8tp5msqk67q \
	I1205 20:18:05.173722  575390 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:18:05.173754  575390 kubeadm.go:310] 	--control-plane 
	I1205 20:18:05.173761  575390 kubeadm.go:310] 
	I1205 20:18:05.173873  575390 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:18:05.173883  575390 kubeadm.go:310] 
	I1205 20:18:05.174015  575390 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fpw9ia.pfcbg8tp5msqk67q \
	I1205 20:18:05.174185  575390 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 20:18:05.177812  575390 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:18:05.177847  575390 cni.go:84] Creating CNI manager for ""
	I1205 20:18:05.177857  575390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:18:05.179477  575390 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:18:05.181277  575390 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:18:05.208436  575390 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:18:05.241099  575390 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:18:05.241171  575390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:18:05.241194  575390 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-617890 minikube.k8s.io/updated_at=2024_12_05T20_18_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=running-upgrade-617890 minikube.k8s.io/primary=true
	I1205 20:18:05.580975  575390 kubeadm.go:1113] duration metric: took 339.863915ms to wait for elevateKubeSystemPrivileges
	I1205 20:18:05.590665  575390 ops.go:34] apiserver oom_adj: -16
	W1205 20:18:05.590771  575390 kubeadm.go:287] apiserver tunnel failed: apiserver port not set
	I1205 20:18:05.590799  575390 kubeadm.go:394] duration metric: took 52.784202278s to StartCluster
	I1205 20:18:05.590830  575390 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:18:05.590924  575390 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:18:05.593681  575390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:18:05.594569  575390 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.205 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:18:05.594691  575390 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:18:05.594868  575390 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-617890"
	I1205 20:18:05.594893  575390 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-617890"
	W1205 20:18:05.594902  575390 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:18:05.594813  575390 config.go:182] Loaded profile config "running-upgrade-617890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1205 20:18:05.594942  575390 host.go:66] Checking if "running-upgrade-617890" exists ...
	I1205 20:18:05.594960  575390 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-617890"
	I1205 20:18:05.594989  575390 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-617890"
	I1205 20:18:05.595354  575390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:18:05.595397  575390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:18:05.595498  575390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:18:05.595544  575390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:18:05.596046  575390 out.go:177] * Verifying Kubernetes components...
	I1205 20:18:05.597728  575390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:18:05.612553  575390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45335
	I1205 20:18:05.613097  575390 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:18:05.613481  575390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I1205 20:18:05.613673  575390 main.go:141] libmachine: Using API Version  1
	I1205 20:18:05.613704  575390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:18:05.613964  575390 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:18:05.614091  575390 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:18:05.614413  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .GetState
	I1205 20:18:05.614553  575390 main.go:141] libmachine: Using API Version  1
	I1205 20:18:05.614578  575390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:18:05.614975  575390 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:18:05.615601  575390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:18:05.615655  575390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:18:05.617184  575390 kapi.go:59] client config for running-upgrade-617890: &rest.Config{Host:"https://192.168.72.205:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/running-upgrade-617890/client.crt", KeyFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/profiles/running-upgrade-617890/client.key", CAFile:"/home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:18:05.617515  575390 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-617890"
	W1205 20:18:05.617533  575390 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:18:05.617563  575390 host.go:66] Checking if "running-upgrade-617890" exists ...
	I1205 20:18:05.617845  575390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:18:05.617878  575390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:18:05.633459  575390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34489
	I1205 20:18:05.634164  575390 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:18:05.634975  575390 main.go:141] libmachine: Using API Version  1
	I1205 20:18:05.635007  575390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:18:05.635459  575390 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:18:05.636186  575390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:18:05.636239  575390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:18:05.636858  575390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36873
	I1205 20:18:05.637278  575390 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:18:05.637958  575390 main.go:141] libmachine: Using API Version  1
	I1205 20:18:05.637983  575390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:18:05.638400  575390 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:18:05.638667  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .GetState
	I1205 20:18:05.640830  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .DriverName
	I1205 20:18:05.642863  575390 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:18:05.644225  575390 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:18:05.644243  575390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:18:05.644263  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .GetSSHHostname
	I1205 20:18:05.647914  575390 main.go:141] libmachine: (running-upgrade-617890) DBG | domain running-upgrade-617890 has defined MAC address 52:54:00:e5:c4:14 in network mk-running-upgrade-617890
	I1205 20:18:05.648566  575390 main.go:141] libmachine: (running-upgrade-617890) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c4:14", ip: ""} in network mk-running-upgrade-617890: {Iface:virbr4 ExpiryTime:2024-12-05 21:15:45 +0000 UTC Type:0 Mac:52:54:00:e5:c4:14 Iaid: IPaddr:192.168.72.205 Prefix:24 Hostname:running-upgrade-617890 Clientid:01:52:54:00:e5:c4:14}
	I1205 20:18:05.648594  575390 main.go:141] libmachine: (running-upgrade-617890) DBG | domain running-upgrade-617890 has defined IP address 192.168.72.205 and MAC address 52:54:00:e5:c4:14 in network mk-running-upgrade-617890
	I1205 20:18:05.648924  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .GetSSHPort
	I1205 20:18:05.649127  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .GetSSHKeyPath
	I1205 20:18:05.649301  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .GetSSHUsername
	I1205 20:18:05.649742  575390 sshutil.go:53] new ssh client: &{IP:192.168.72.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/running-upgrade-617890/id_rsa Username:docker}
	I1205 20:18:05.657614  575390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I1205 20:18:05.658022  575390 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:18:05.658459  575390 main.go:141] libmachine: Using API Version  1
	I1205 20:18:05.658474  575390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:18:05.658764  575390 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:18:05.659072  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .GetState
	I1205 20:18:05.660768  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .DriverName
	I1205 20:18:05.660979  575390 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:18:05.660996  575390 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:18:05.661020  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .GetSSHHostname
	I1205 20:18:05.664599  575390 main.go:141] libmachine: (running-upgrade-617890) DBG | domain running-upgrade-617890 has defined MAC address 52:54:00:e5:c4:14 in network mk-running-upgrade-617890
	I1205 20:18:05.665148  575390 main.go:141] libmachine: (running-upgrade-617890) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:c4:14", ip: ""} in network mk-running-upgrade-617890: {Iface:virbr4 ExpiryTime:2024-12-05 21:15:45 +0000 UTC Type:0 Mac:52:54:00:e5:c4:14 Iaid: IPaddr:192.168.72.205 Prefix:24 Hostname:running-upgrade-617890 Clientid:01:52:54:00:e5:c4:14}
	I1205 20:18:05.665173  575390 main.go:141] libmachine: (running-upgrade-617890) DBG | domain running-upgrade-617890 has defined IP address 192.168.72.205 and MAC address 52:54:00:e5:c4:14 in network mk-running-upgrade-617890
	I1205 20:18:05.665227  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .GetSSHPort
	I1205 20:18:05.665417  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .GetSSHKeyPath
	I1205 20:18:05.665535  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .GetSSHUsername
	I1205 20:18:05.665650  575390 sshutil.go:53] new ssh client: &{IP:192.168.72.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/running-upgrade-617890/id_rsa Username:docker}
	I1205 20:18:05.764491  575390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:18:05.787770  575390 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:18:05.787892  575390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:18:05.804003  575390 api_server.go:72] duration metric: took 209.385526ms to wait for apiserver process to appear ...
	I1205 20:18:05.804027  575390 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:18:05.804054  575390 api_server.go:253] Checking apiserver healthz at https://192.168.72.205:8443/healthz ...
	I1205 20:18:05.810551  575390 api_server.go:279] https://192.168.72.205:8443/healthz returned 200:
	ok
	I1205 20:18:05.819440  575390 api_server.go:141] control plane version: v1.24.1
	I1205 20:18:05.819479  575390 api_server.go:131] duration metric: took 15.443584ms to wait for apiserver health ...
	I1205 20:18:05.819492  575390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:18:05.824745  575390 system_pods.go:59] 4 kube-system pods found
	I1205 20:18:05.824776  575390 system_pods.go:61] "etcd-running-upgrade-617890" [55fed725-c9d6-4109-9f5f-7b808eb67a2c] Pending
	I1205 20:18:05.824781  575390 system_pods.go:61] "kube-apiserver-running-upgrade-617890" [d2f6eecd-8a29-4435-8f38-fa2ed63e2d3f] Pending
	I1205 20:18:05.824785  575390 system_pods.go:61] "kube-controller-manager-running-upgrade-617890" [7e06395a-6553-4955-aa83-2c8d25ba1bbb] Pending
	I1205 20:18:05.824789  575390 system_pods.go:61] "kube-scheduler-running-upgrade-617890" [20f3a4fd-6578-4b7b-b458-ed867737f78b] Pending
	I1205 20:18:05.824795  575390 system_pods.go:74] duration metric: took 5.296455ms to wait for pod list to return data ...
	I1205 20:18:05.824807  575390 kubeadm.go:582] duration metric: took 230.19671ms to wait for: map[apiserver:true system_pods:true]
	I1205 20:18:05.824821  575390 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:18:05.828533  575390 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:18:05.828568  575390 node_conditions.go:123] node cpu capacity is 2
	I1205 20:18:05.828608  575390 node_conditions.go:105] duration metric: took 3.78092ms to run NodePressure ...
	I1205 20:18:05.828627  575390 start.go:241] waiting for startup goroutines ...
	I1205 20:18:05.849814  575390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:18:05.907428  575390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:18:06.696580  575390 main.go:141] libmachine: Making call to close driver server
	I1205 20:18:06.696614  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .Close
	I1205 20:18:06.696930  575390 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:18:06.696949  575390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:18:06.696959  575390 main.go:141] libmachine: Making call to close driver server
	I1205 20:18:06.696969  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .Close
	I1205 20:18:06.697253  575390 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:18:06.697273  575390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:18:06.708499  575390 main.go:141] libmachine: Making call to close driver server
	I1205 20:18:06.708523  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .Close
	I1205 20:18:06.708768  575390 main.go:141] libmachine: (running-upgrade-617890) DBG | Closing plugin on server side
	I1205 20:18:06.708805  575390 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:18:06.708813  575390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:18:06.768901  575390 main.go:141] libmachine: Making call to close driver server
	I1205 20:18:06.768932  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .Close
	I1205 20:18:06.769306  575390 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:18:06.769324  575390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:18:06.769334  575390 main.go:141] libmachine: Making call to close driver server
	I1205 20:18:06.769343  575390 main.go:141] libmachine: (running-upgrade-617890) Calling .Close
	I1205 20:18:06.769622  575390 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:18:06.769641  575390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:18:06.769640  575390 main.go:141] libmachine: (running-upgrade-617890) DBG | Closing plugin on server side
	I1205 20:18:06.771202  575390 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1205 20:18:06.772744  575390 addons.go:510] duration metric: took 1.178048235s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1205 20:18:06.772793  575390 start.go:246] waiting for cluster config update ...
	I1205 20:18:06.772808  575390 start.go:255] writing updated cluster config ...
	I1205 20:18:06.773108  575390 ssh_runner.go:195] Run: rm -f paused
	I1205 20:18:06.833132  575390 start.go:600] kubectl: 1.31.3, cluster: 1.24.1 (minor skew: 7)
	I1205 20:18:06.834642  575390 out.go:201] 
	W1205 20:18:06.836131  575390 out.go:270] ! /usr/local/bin/kubectl is version 1.31.3, which may have incompatibilities with Kubernetes 1.24.1.
	I1205 20:18:06.837541  575390 out.go:177]   - Want kubectl v1.24.1? Try 'minikube kubectl -- get pods -A'
	I1205 20:18:06.839104  575390 out.go:177] * Done! kubectl is now configured to use "running-upgrade-617890" cluster and "default" namespace by default
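	
	(Editor's illustration, not part of the captured log.) The lines above show the api_server.go readiness wait: minikube polls https://192.168.72.205:8443/healthz until it returns 200 "ok" before reporting the cluster as ready. The snippet below is a minimal, hypothetical sketch of that kind of /healthz poll written for this report; it is not minikube's own implementation. The endpoint address is taken from the log, and the InsecureSkipVerify setting is an assumption made for brevity (minikube authenticates with the cluster's client certificates instead).
	
	// healthz_wait.go - illustrative sketch of an apiserver /healthz readiness poll.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for the sketch only: skip certificate verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					// Mirrors the "returned 200: ok" line in the log above.
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz not ready within %s", timeout)
	}
	
	func main() {
		// Address taken from the log above; adjust for your own cluster.
		if err := waitForHealthz("https://192.168.72.205:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}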
	
	
	==> CRI-O <==
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.237069063Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429887237044827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=940f5c3e-319c-4756-afec-38e86a3770e4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.237996316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3463961-05e4-4221-9a59-9b818a96211d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.238165967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3463961-05e4-4221-9a59-9b818a96211d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.238412538Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f0ab945fce0cfc411e685e34d69b7861c565af3b31ef78157c39cb1a4526b3a,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733429869873872676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83504d06bbf4f9abdf00ce5d5e1cc316692b206056054d6f250a9ae65843afbe,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733429869859822504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a448fe5de6cb26361e360c5526cc48cb624e6557d23a1b481db22406456fec,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733429866245822008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926cc820a9
957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f404bc68434b2921a7d68bd1fa4bc5aa9a8bd64bfaa3476c53afb80d203fe4c8,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733429866275969321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafd
a754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59948e40715f67533ef4cf36b4f0b69a232f601aafabaeb14485b8e28c2e41a,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733429866250239749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0847a0a21829616614
d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff734496e3eebdee5a7cf70c8fac85080bc8736f14b86b7157aa102294a02e7,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733429866215410030,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733429850654296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733429849869245309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafda754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733429849778026060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-
jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733429849769351069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733429849728807733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: f0847a0a21829616614d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733429849660169225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 926cc820a9957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3463961-05e4-4221-9a59-9b818a96211d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.287564991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f2798c8-eae4-403c-aa63-ec3988df9f2a name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.287670427Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f2798c8-eae4-403c-aa63-ec3988df9f2a name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.289538006Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b21f87b5-0252-4cdd-8316-617d44b395b4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.290231964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429887290193216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b21f87b5-0252-4cdd-8316-617d44b395b4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.291835979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c918ee8-74ec-427e-9d7e-c9e1b4f4344a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.291920015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c918ee8-74ec-427e-9d7e-c9e1b4f4344a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.292533368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f0ab945fce0cfc411e685e34d69b7861c565af3b31ef78157c39cb1a4526b3a,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733429869873872676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83504d06bbf4f9abdf00ce5d5e1cc316692b206056054d6f250a9ae65843afbe,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733429869859822504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a448fe5de6cb26361e360c5526cc48cb624e6557d23a1b481db22406456fec,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733429866245822008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926cc820a9
957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f404bc68434b2921a7d68bd1fa4bc5aa9a8bd64bfaa3476c53afb80d203fe4c8,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733429866275969321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafd
a754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59948e40715f67533ef4cf36b4f0b69a232f601aafabaeb14485b8e28c2e41a,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733429866250239749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0847a0a21829616614
d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff734496e3eebdee5a7cf70c8fac85080bc8736f14b86b7157aa102294a02e7,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733429866215410030,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733429850654296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733429849869245309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafda754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733429849778026060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-
jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733429849769351069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733429849728807733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: f0847a0a21829616614d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733429849660169225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 926cc820a9957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c918ee8-74ec-427e-9d7e-c9e1b4f4344a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.342261008Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09521cfd-a0fc-4ae1-b6c7-966b43aa700a name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.342377100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09521cfd-a0fc-4ae1-b6c7-966b43aa700a name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.344038975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=defda4ca-9e87-4950-9b1d-b08de9066659 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.344650346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429887344618137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=defda4ca-9e87-4950-9b1d-b08de9066659 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.345660556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b94095d3-fb5b-4be1-a59f-9bb518ac3b57 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.345739909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b94095d3-fb5b-4be1-a59f-9bb518ac3b57 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.346301482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f0ab945fce0cfc411e685e34d69b7861c565af3b31ef78157c39cb1a4526b3a,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733429869873872676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83504d06bbf4f9abdf00ce5d5e1cc316692b206056054d6f250a9ae65843afbe,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733429869859822504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a448fe5de6cb26361e360c5526cc48cb624e6557d23a1b481db22406456fec,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733429866245822008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926cc820a9
957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f404bc68434b2921a7d68bd1fa4bc5aa9a8bd64bfaa3476c53afb80d203fe4c8,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733429866275969321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafd
a754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59948e40715f67533ef4cf36b4f0b69a232f601aafabaeb14485b8e28c2e41a,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733429866250239749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0847a0a21829616614
d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff734496e3eebdee5a7cf70c8fac85080bc8736f14b86b7157aa102294a02e7,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733429866215410030,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733429850654296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733429849869245309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafda754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733429849778026060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-
jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733429849769351069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733429849728807733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: f0847a0a21829616614d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733429849660169225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 926cc820a9957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b94095d3-fb5b-4be1-a59f-9bb518ac3b57 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.396624323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=136a8194-7929-4a76-9d5e-73f5b14a86e3 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.396696657Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=136a8194-7929-4a76-9d5e-73f5b14a86e3 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.399382256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ddf6a6f-8964-4d7a-9d78-3c05de2c4828 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.400382310Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429887400352061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ddf6a6f-8964-4d7a-9d78-3c05de2c4828 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.401263086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6c2ea7f-ca2a-42c3-b1f0-d58e6d8751f7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.401340331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6c2ea7f-ca2a-42c3-b1f0-d58e6d8751f7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:18:07 pause-594992 crio[2093]: time="2024-12-05 20:18:07.401633833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f0ab945fce0cfc411e685e34d69b7861c565af3b31ef78157c39cb1a4526b3a,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733429869873872676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83504d06bbf4f9abdf00ce5d5e1cc316692b206056054d6f250a9ae65843afbe,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733429869859822504,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00a448fe5de6cb26361e360c5526cc48cb624e6557d23a1b481db22406456fec,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733429866245822008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 926cc820a9
957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f404bc68434b2921a7d68bd1fa4bc5aa9a8bd64bfaa3476c53afb80d203fe4c8,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733429866275969321,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafd
a754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59948e40715f67533ef4cf36b4f0b69a232f601aafabaeb14485b8e28c2e41a,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733429866250239749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0847a0a21829616614
d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff734496e3eebdee5a7cf70c8fac85080bc8736f14b86b7157aa102294a02e7,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733429866215410030,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163,PodSandboxId:fa8847f02f6634c6d874721893073b58ae3a82b99d467b2380fdf442ea79fa75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733429850654296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-x529d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c29f67b-db11-4444-a0ed-18a831e6a5fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e,PodSandboxId:df80148bdf512ba2e6a409909fa0e63aa64d82e8a5413b4b305a8dc5a9d357c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733429849869245309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f087dc577c0adbc17c33185aafda754,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b,PodSandboxId:783af0822a6e8651fcbc696271fbe1a8e36cf3df9c3b3fec49d1e35fc7cc491a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733429849778026060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-
jxr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d94ddc-c393-4083-807d-febc10b83bd5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f,PodSandboxId:ad92e06f04c03c444f8c5bdf3bf5f12d00476530340fdcf9c7b8f9afda625ef1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733429849769351069,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-594992,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 537fa6fa5d15b54f9f387b8c108ee3ae,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb,PodSandboxId:51ea6c16af67bba544ed4c6698dc543ee976dbba4cdcacee81956d1adf9fe3a4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733429849728807733,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-594992,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: f0847a0a21829616614d0fdf19abceba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916,PodSandboxId:ad44895fc7f80192e4109784a2d134e90b8b8ae8a3d1c70d2ee26a18afb370c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733429849660169225,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-594992,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 926cc820a9957a251337825f993b8655,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6c2ea7f-ca2a-42c3-b1f0-d58e6d8751f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4f0ab945fce0c       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   17 seconds ago      Running             kube-proxy                2                   783af0822a6e8       kube-proxy-jxr6b
	83504d06bbf4f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 seconds ago      Running             coredns                   2                   fa8847f02f663       coredns-7c65d6cfc9-x529d
	f404bc68434b2       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   21 seconds ago      Running             kube-scheduler            2                   df80148bdf512       kube-scheduler-pause-594992
	d59948e40715f       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   21 seconds ago      Running             kube-controller-manager   2                   51ea6c16af67b       kube-controller-manager-pause-594992
	00a448fe5de6c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   21 seconds ago      Running             kube-apiserver            2                   ad44895fc7f80       kube-apiserver-pause-594992
	9ff734496e3ee       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   21 seconds ago      Running             etcd                      2                   ad92e06f04c03       etcd-pause-594992
	495c73deed76c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   36 seconds ago      Exited              coredns                   1                   fa8847f02f663       coredns-7c65d6cfc9-x529d
	5d4b65f2c05d5       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   37 seconds ago      Exited              kube-scheduler            1                   df80148bdf512       kube-scheduler-pause-594992
	03439c2516853       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   37 seconds ago      Exited              kube-proxy                1                   783af0822a6e8       kube-proxy-jxr6b
	520bd43d560d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   37 seconds ago      Exited              etcd                      1                   ad92e06f04c03       etcd-pause-594992
	b496d56cafd2d       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   37 seconds ago      Exited              kube-controller-manager   1                   51ea6c16af67b       kube-controller-manager-pause-594992
	8ad4a3ea6f362       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   37 seconds ago      Exited              kube-apiserver            1                   ad44895fc7f80       kube-apiserver-pause-594992
	
	
	==> coredns [495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163] <==
	
	
	==> coredns [83504d06bbf4f9abdf00ce5d5e1cc316692b206056054d6f250a9ae65843afbe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48267 - 31383 "HINFO IN 6921888526448902377.4476785710858164741. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022561039s
	
	
	==> describe nodes <==
	Name:               pause-594992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-594992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=pause-594992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_17_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:17:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-594992
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:17:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:17:49 +0000   Thu, 05 Dec 2024 20:17:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:17:49 +0000   Thu, 05 Dec 2024 20:17:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:17:49 +0000   Thu, 05 Dec 2024 20:17:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:17:49 +0000   Thu, 05 Dec 2024 20:17:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.246
	  Hostname:    pause-594992
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cecd04d00d5479395187937774b0a3a
	  System UUID:                7cecd04d-00d5-4793-9518-7937774b0a3a
	  Boot ID:                    beaf7767-4f39-4420-a660-294211704c2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-x529d                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     54s
	  kube-system                 etcd-pause-594992                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         59s
	  kube-system                 kube-apiserver-pause-594992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-pause-594992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-jxr6b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-pause-594992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientPID     69s (x7 over 69s)  kubelet          Node pause-594992 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node pause-594992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node pause-594992 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                59s                kubelet          Node pause-594992 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  59s                kubelet          Node pause-594992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s                kubelet          Node pause-594992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s                kubelet          Node pause-594992 status is now: NodeHasSufficientPID
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           55s                node-controller  Node pause-594992 event: Registered Node pause-594992 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-594992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-594992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-594992 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-594992 event: Registered Node pause-594992 in Controller
	
	
	==> dmesg <==
	[ +10.579173] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.067290] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.082615] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.194178] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.145463] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.329912] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +4.652389] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +0.063280] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.417088] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +1.003817] kauditd_printk_skb: 57 callbacks suppressed
	[Dec 5 20:17] kauditd_printk_skb: 30 callbacks suppressed
	[  +1.107554] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +4.615621] systemd-fstab-generator[1350]: Ignoring "noauto" option for root device
	[  +0.094239] kauditd_printk_skb: 15 callbacks suppressed
	[ +14.703075] systemd-fstab-generator[2017]: Ignoring "noauto" option for root device
	[  +0.070069] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.060934] systemd-fstab-generator[2029]: Ignoring "noauto" option for root device
	[  +0.166088] systemd-fstab-generator[2043]: Ignoring "noauto" option for root device
	[  +0.143077] systemd-fstab-generator[2055]: Ignoring "noauto" option for root device
	[  +0.335172] systemd-fstab-generator[2084]: Ignoring "noauto" option for root device
	[  +1.114512] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[  +4.387595] kauditd_printk_skb: 196 callbacks suppressed
	[ +11.802804] systemd-fstab-generator[3063]: Ignoring "noauto" option for root device
	[  +7.462104] kauditd_printk_skb: 51 callbacks suppressed
	[Dec 5 20:18] systemd-fstab-generator[3513]: Ignoring "noauto" option for root device
	
	
	==> etcd [520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f] <==
	{"level":"info","ts":"2024-12-05T20:17:31.486981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 received MsgPreVoteResp from 26a48da650cf9008 at term 2"}
	{"level":"info","ts":"2024-12-05T20:17:31.487023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became candidate at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:31.487047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 received MsgVoteResp from 26a48da650cf9008 at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:31.487155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became leader at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:31.487196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 26a48da650cf9008 elected leader 26a48da650cf9008 at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:31.494457Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:17:31.495449Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:17:31.496244Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:17:31.494411Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"26a48da650cf9008","local-member-attributes":"{Name:pause-594992 ClientURLs:[https://192.168.50.246:2379]}","request-path":"/0/members/26a48da650cf9008/attributes","cluster-id":"4445e918310c0aa2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:17:31.499632Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:17:31.503449Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:17:31.503523Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:17:31.504503Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:17:31.542352Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.246:2379"}
	{"level":"info","ts":"2024-12-05T20:17:33.535869Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-05T20:17:33.535905Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-594992","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.246:2380"],"advertise-client-urls":["https://192.168.50.246:2379"]}
	{"level":"warn","ts":"2024-12-05T20:17:33.535963Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-05T20:17:33.536035Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/12/05 20:17:33 WARNING: [core] [Server #5] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-05T20:17:33.577286Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.246:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-05T20:17:33.577512Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.246:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-05T20:17:33.577833Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"26a48da650cf9008","current-leader-member-id":"26a48da650cf9008"}
	{"level":"info","ts":"2024-12-05T20:17:33.587506Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.246:2380"}
	{"level":"info","ts":"2024-12-05T20:17:33.587643Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.246:2380"}
	{"level":"info","ts":"2024-12-05T20:17:33.587671Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-594992","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.246:2380"],"advertise-client-urls":["https://192.168.50.246:2379"]}
	
	
	==> etcd [9ff734496e3eebdee5a7cf70c8fac85080bc8736f14b86b7157aa102294a02e7] <==
	{"level":"info","ts":"2024-12-05T20:17:46.526385Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4445e918310c0aa2","local-member-id":"26a48da650cf9008","added-peer-id":"26a48da650cf9008","added-peer-peer-urls":["https://192.168.50.246:2380"]}
	{"level":"info","ts":"2024-12-05T20:17:46.526477Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4445e918310c0aa2","local-member-id":"26a48da650cf9008","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:17:46.526533Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:17:46.542389Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:17:46.545364Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T20:17:46.545637Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"26a48da650cf9008","initial-advertise-peer-urls":["https://192.168.50.246:2380"],"listen-peer-urls":["https://192.168.50.246:2380"],"advertise-client-urls":["https://192.168.50.246:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.246:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T20:17:46.545669Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T20:17:46.545767Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.246:2380"}
	{"level":"info","ts":"2024-12-05T20:17:46.545776Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.246:2380"}
	{"level":"info","ts":"2024-12-05T20:17:47.485163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:47.485238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:47.485271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 received MsgPreVoteResp from 26a48da650cf9008 at term 3"}
	{"level":"info","ts":"2024-12-05T20:17:47.485287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became candidate at term 4"}
	{"level":"info","ts":"2024-12-05T20:17:47.485301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 received MsgVoteResp from 26a48da650cf9008 at term 4"}
	{"level":"info","ts":"2024-12-05T20:17:47.485317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26a48da650cf9008 became leader at term 4"}
	{"level":"info","ts":"2024-12-05T20:17:47.485325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 26a48da650cf9008 elected leader 26a48da650cf9008 at term 4"}
	{"level":"info","ts":"2024-12-05T20:17:47.493334Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"26a48da650cf9008","local-member-attributes":"{Name:pause-594992 ClientURLs:[https://192.168.50.246:2379]}","request-path":"/0/members/26a48da650cf9008/attributes","cluster-id":"4445e918310c0aa2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:17:47.493525Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:17:47.494391Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:17:47.501232Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:17:47.509144Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:17:47.509383Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:17:47.509418Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:17:47.510035Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:17:47.511003Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.246:2379"}
	
	
	==> kernel <==
	 20:18:07 up 1 min,  0 users,  load average: 1.34, 0.52, 0.19
	Linux pause-594992 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [00a448fe5de6cb26361e360c5526cc48cb624e6557d23a1b481db22406456fec] <==
	I1205 20:17:49.038999       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 20:17:49.039188       1 aggregator.go:171] initial CRD sync complete...
	I1205 20:17:49.043602       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 20:17:49.043709       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 20:17:49.095375       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1205 20:17:49.095668       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 20:17:49.095714       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 20:17:49.097606       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1205 20:17:49.097838       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1205 20:17:49.097870       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 20:17:49.098334       1 shared_informer.go:320] Caches are synced for configmaps
	I1205 20:17:49.100165       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1205 20:17:49.100834       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1205 20:17:49.123021       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 20:17:49.123208       1 policy_source.go:224] refreshing policies
	I1205 20:17:49.132969       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 20:17:49.143843       1 cache.go:39] Caches are synced for autoregister controller
	I1205 20:17:50.000503       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 20:17:50.952491       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 20:17:50.967514       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 20:17:51.019565       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 20:17:51.077240       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 20:17:51.091816       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 20:17:52.717854       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 20:17:52.767875       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916] <==
	W1205 20:17:43.081011       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.081324       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.086876       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.090331       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.101761       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.155346       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.162952       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.168418       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.201385       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.222197       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.312513       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.369567       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.397445       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.451504       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.479025       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.500633       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.550200       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.570905       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.584307       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.601323       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.659291       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.682436       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.688242       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.712844       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:17:43.783413       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb] <==
	I1205 20:17:31.292795       1 serving.go:386] Generated self-signed cert in-memory
	I1205 20:17:31.768964       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1205 20:17:31.769020       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:17:31.779987       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1205 20:17:31.780284       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 20:17:31.780342       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 20:17:31.780490       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [d59948e40715f67533ef4cf36b4f0b69a232f601aafabaeb14485b8e28c2e41a] <==
	I1205 20:17:52.421432       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1205 20:17:52.421462       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1205 20:17:52.421597       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-594992"
	I1205 20:17:52.431209       1 shared_informer.go:320] Caches are synced for service account
	I1205 20:17:52.450549       1 shared_informer.go:320] Caches are synced for daemon sets
	I1205 20:17:52.453856       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1205 20:17:52.457641       1 shared_informer.go:320] Caches are synced for PVC protection
	I1205 20:17:52.462400       1 shared_informer.go:320] Caches are synced for stateful set
	I1205 20:17:52.465255       1 shared_informer.go:320] Caches are synced for ephemeral
	I1205 20:17:52.474194       1 shared_informer.go:320] Caches are synced for attach detach
	I1205 20:17:52.477262       1 shared_informer.go:320] Caches are synced for expand
	I1205 20:17:52.485036       1 shared_informer.go:320] Caches are synced for crt configmap
	I1205 20:17:52.496431       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1205 20:17:52.514268       1 shared_informer.go:320] Caches are synced for persistent volume
	I1205 20:17:52.631197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="236.812057ms"
	I1205 20:17:52.631366       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.094µs"
	I1205 20:17:52.633162       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 20:17:52.638112       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 20:17:52.665442       1 shared_informer.go:320] Caches are synced for taint
	I1205 20:17:52.665673       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 20:17:52.665799       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-594992"
	I1205 20:17:52.665850       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 20:17:53.047589       1 shared_informer.go:320] Caches are synced for garbage collector
	I1205 20:17:53.047613       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 20:17:53.057618       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b] <==
	I1205 20:17:31.245123       1 server_linux.go:66] "Using iptables proxy"
	E1205 20:17:31.360229       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:17:31.613242       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:17:33.161621       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.246"]
	E1205 20:17:33.173249       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:17:33.329727       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:17:33.329972       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:17:33.330031       1 server_linux.go:169] "Using iptables Proxier"
	
	
	==> kube-proxy [4f0ab945fce0cfc411e685e34d69b7861c565af3b31ef78157c39cb1a4526b3a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:17:50.150397       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:17:50.163991       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.246"]
	E1205 20:17:50.165547       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:17:50.229384       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:17:50.229420       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:17:50.229443       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:17:50.232414       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:17:50.232724       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:17:50.232985       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:17:50.234292       1 config.go:199] "Starting service config controller"
	I1205 20:17:50.234348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:17:50.234388       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:17:50.234404       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:17:50.234873       1 config.go:328] "Starting node config controller"
	I1205 20:17:50.234911       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:17:50.334657       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:17:50.334770       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:17:50.335209       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e] <==
	I1205 20:17:31.641825       1 serving.go:386] Generated self-signed cert in-memory
	I1205 20:17:33.204041       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 20:17:33.208262       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1205 20:17:33.214599       1 secure_serving.go:111] Initial population of client CA failed: Get "https://192.168.50.246:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": context canceled
	I1205 20:17:33.215056       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E1205 20:17:33.220336       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E1205 20:17:33.220480       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f404bc68434b2921a7d68bd1fa4bc5aa9a8bd64bfaa3476c53afb80d203fe4c8] <==
	I1205 20:17:47.160721       1 serving.go:386] Generated self-signed cert in-memory
	W1205 20:17:49.012742       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 20:17:49.012782       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:17:49.012791       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 20:17:49.012797       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 20:17:49.072963       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 20:17:49.073009       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:17:49.075333       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 20:17:49.075441       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:17:49.075459       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:17:49.075471       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 20:17:49.175587       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:17:46 pause-594992 kubelet[3070]: I1205 20:17:46.196118    3070 scope.go:117] "RemoveContainer" containerID="520bd43d560d042506a61ee26beabaae5115f81728340ced635de2657d5fea4f"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: I1205 20:17:46.197457    3070 scope.go:117] "RemoveContainer" containerID="8ad4a3ea6f36235b3d837268ecdefb24951435a4edf008b112588ba3f5f83916"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: I1205 20:17:46.197850    3070 scope.go:117] "RemoveContainer" containerID="b496d56cafd2d6a7afe7553c461e588c295e2f6ff2764a4a06e194e1d20399cb"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: I1205 20:17:46.205482    3070 scope.go:117] "RemoveContainer" containerID="5d4b65f2c05d5039cdee981da2fec37671762524ea220af215394d893a9d090e"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: I1205 20:17:46.418572    3070 kubelet_node_status.go:72] "Attempting to register node" node="pause-594992"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: E1205 20:17:46.420732    3070 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.246:8443: connect: connection refused" node="pause-594992"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: W1205 20:17:46.453423    3070 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.50.246:8443: connect: connection refused
	Dec 05 20:17:46 pause-594992 kubelet[3070]: E1205 20:17:46.453511    3070 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.50.246:8443: connect: connection refused" logger="UnhandledError"
	Dec 05 20:17:46 pause-594992 kubelet[3070]: W1205 20:17:46.540237    3070 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.246:8443: connect: connection refused
	Dec 05 20:17:46 pause-594992 kubelet[3070]: E1205 20:17:46.540332    3070 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.246:8443: connect: connection refused" logger="UnhandledError"
	Dec 05 20:17:47 pause-594992 kubelet[3070]: I1205 20:17:47.222815    3070 kubelet_node_status.go:72] "Attempting to register node" node="pause-594992"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.162004    3070 kubelet_node_status.go:111] "Node was previously registered" node="pause-594992"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.162136    3070 kubelet_node_status.go:75] "Successfully registered node" node="pause-594992"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.162167    3070 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.163348    3070 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.534643    3070 apiserver.go:52] "Watching apiserver"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.574266    3070 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.655149    3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45d94ddc-c393-4083-807d-febc10b83bd5-lib-modules\") pod \"kube-proxy-jxr6b\" (UID: \"45d94ddc-c393-4083-807d-febc10b83bd5\") " pod="kube-system/kube-proxy-jxr6b"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.655250    3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45d94ddc-c393-4083-807d-febc10b83bd5-xtables-lock\") pod \"kube-proxy-jxr6b\" (UID: \"45d94ddc-c393-4083-807d-febc10b83bd5\") " pod="kube-system/kube-proxy-jxr6b"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.840634    3070 scope.go:117] "RemoveContainer" containerID="495c73deed76c0d6dbdfd63738005f5bdcb035585abe2d2bf533e9fc5990d163"
	Dec 05 20:17:49 pause-594992 kubelet[3070]: I1205 20:17:49.843787    3070 scope.go:117] "RemoveContainer" containerID="03439c2516853cca606e7485a51dbd0b7d6d1c2eeb7f602460f4f7399f17ef0b"
	Dec 05 20:17:55 pause-594992 kubelet[3070]: E1205 20:17:55.726927    3070 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429875726658868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:17:55 pause-594992 kubelet[3070]: E1205 20:17:55.726979    3070 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429875726658868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:18:05 pause-594992 kubelet[3070]: E1205 20:18:05.732721    3070 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429885732312972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:18:05 pause-594992 kubelet[3070]: E1205 20:18:05.732754    3070 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733429885732312972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-594992 -n pause-594992
helpers_test.go:261: (dbg) Run:  kubectl --context pause-594992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (48.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (332.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-386085 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-386085 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m32.218564643s)

                                                
                                                
-- stdout --
	* [old-k8s-version-386085] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-386085" primary control-plane node in "old-k8s-version-386085" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:19:43.910599  581232 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:19:43.910874  581232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:19:43.910885  581232 out.go:358] Setting ErrFile to fd 2...
	I1205 20:19:43.910892  581232 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:19:43.911106  581232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:19:43.911727  581232 out.go:352] Setting JSON to false
	I1205 20:19:43.912790  581232 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10930,"bootTime":1733419054,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:19:43.912915  581232 start.go:139] virtualization: kvm guest
	I1205 20:19:43.915227  581232 out.go:177] * [old-k8s-version-386085] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:19:43.917220  581232 notify.go:220] Checking for updates...
	I1205 20:19:43.917235  581232 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:19:43.919471  581232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:19:43.920815  581232 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:19:43.922314  581232 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:19:43.923558  581232 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:19:43.924843  581232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:19:43.926859  581232 config.go:182] Loaded profile config "cert-expiration-315387": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:19:43.927042  581232 config.go:182] Loaded profile config "cert-options-790679": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:19:43.927191  581232 config.go:182] Loaded profile config "kubernetes-upgrade-886958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:19:43.927353  581232 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:19:43.974508  581232 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:19:43.975997  581232 start.go:297] selected driver: kvm2
	I1205 20:19:43.976021  581232 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:19:43.976039  581232 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:19:43.977096  581232 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:19:43.977202  581232 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:19:43.992940  581232 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:19:43.993013  581232 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:19:43.993266  581232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:19:43.993301  581232 cni.go:84] Creating CNI manager for ""
	I1205 20:19:43.993345  581232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:19:43.993354  581232 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:19:43.993404  581232 start.go:340] cluster config:
	{Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:19:43.993503  581232 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:19:43.995272  581232 out.go:177] * Starting "old-k8s-version-386085" primary control-plane node in "old-k8s-version-386085" cluster
	I1205 20:19:43.996521  581232 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:19:43.996573  581232 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 20:19:43.996585  581232 cache.go:56] Caching tarball of preloaded images
	I1205 20:19:43.996670  581232 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:19:43.996685  581232 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 20:19:43.996795  581232 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json ...
	I1205 20:19:43.996822  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json: {Name:mk250ec5a0a24bc7c846422c94ff3b2d456333aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:19:43.996981  581232 start.go:360] acquireMachinesLock for old-k8s-version-386085: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:20:43.065605  581232 start.go:364] duration metric: took 59.068526282s to acquireMachinesLock for "old-k8s-version-386085"
	I1205 20:20:43.065698  581232 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:20:43.065833  581232 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:20:43.067744  581232 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:20:43.068002  581232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:43.068064  581232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:43.085653  581232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I1205 20:20:43.086106  581232 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:43.086719  581232 main.go:141] libmachine: Using API Version  1
	I1205 20:20:43.086750  581232 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:43.087214  581232 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:43.087431  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:20:43.087591  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:20:43.087750  581232 start.go:159] libmachine.API.Create for "old-k8s-version-386085" (driver="kvm2")
	I1205 20:20:43.087785  581232 client.go:168] LocalClient.Create starting
	I1205 20:20:43.087814  581232 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 20:20:43.087861  581232 main.go:141] libmachine: Decoding PEM data...
	I1205 20:20:43.087890  581232 main.go:141] libmachine: Parsing certificate...
	I1205 20:20:43.087982  581232 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 20:20:43.088014  581232 main.go:141] libmachine: Decoding PEM data...
	I1205 20:20:43.088036  581232 main.go:141] libmachine: Parsing certificate...
	I1205 20:20:43.088060  581232 main.go:141] libmachine: Running pre-create checks...
	I1205 20:20:43.088080  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .PreCreateCheck
	I1205 20:20:43.088515  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:20:43.088941  581232 main.go:141] libmachine: Creating machine...
	I1205 20:20:43.088955  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .Create
	I1205 20:20:43.089096  581232 main.go:141] libmachine: (old-k8s-version-386085) Creating KVM machine...
	I1205 20:20:43.090380  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found existing default KVM network
	I1205 20:20:43.092456  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:43.092232  581908 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:aa:7e} reservation:<nil>}
	I1205 20:20:43.093463  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:43.093357  581908 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:77:44} reservation:<nil>}
	I1205 20:20:43.094667  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:43.094585  581908 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:b8:a6:e9} reservation:<nil>}
	I1205 20:20:43.095983  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:43.095880  581908 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5c30}
	I1205 20:20:43.096011  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | created network xml: 
	I1205 20:20:43.096046  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | <network>
	I1205 20:20:43.096073  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG |   <name>mk-old-k8s-version-386085</name>
	I1205 20:20:43.096084  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG |   <dns enable='no'/>
	I1205 20:20:43.096094  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG |   
	I1205 20:20:43.096105  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1205 20:20:43.096123  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG |     <dhcp>
	I1205 20:20:43.096131  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1205 20:20:43.096138  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG |     </dhcp>
	I1205 20:20:43.096149  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG |   </ip>
	I1205 20:20:43.096160  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG |   
	I1205 20:20:43.096169  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | </network>
	I1205 20:20:43.096185  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | 
	I1205 20:20:43.107425  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | trying to create private KVM network mk-old-k8s-version-386085 192.168.72.0/24...
	I1205 20:20:43.184292  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | private KVM network mk-old-k8s-version-386085 192.168.72.0/24 created
	I1205 20:20:43.184334  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:43.184223  581908 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:20:43.184373  581232 main.go:141] libmachine: (old-k8s-version-386085) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085 ...
	I1205 20:20:43.184403  581232 main.go:141] libmachine: (old-k8s-version-386085) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:20:43.184421  581232 main.go:141] libmachine: (old-k8s-version-386085) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:20:43.470769  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:43.470561  581908 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa...
	I1205 20:20:43.633138  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:43.632979  581908 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/old-k8s-version-386085.rawdisk...
	I1205 20:20:43.633177  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Writing magic tar header
	I1205 20:20:43.633201  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Writing SSH key tar header
	I1205 20:20:43.633240  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:43.633109  581908 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085 ...
	I1205 20:20:43.633264  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085
	I1205 20:20:43.633282  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 20:20:43.633296  581232 main.go:141] libmachine: (old-k8s-version-386085) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085 (perms=drwx------)
	I1205 20:20:43.633310  581232 main.go:141] libmachine: (old-k8s-version-386085) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:20:43.633320  581232 main.go:141] libmachine: (old-k8s-version-386085) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 20:20:43.633344  581232 main.go:141] libmachine: (old-k8s-version-386085) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 20:20:43.633354  581232 main.go:141] libmachine: (old-k8s-version-386085) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:20:43.633365  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:20:43.633378  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 20:20:43.633387  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:20:43.633419  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:20:43.633446  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Checking permissions on dir: /home
	I1205 20:20:43.633461  581232 main.go:141] libmachine: (old-k8s-version-386085) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:20:43.633478  581232 main.go:141] libmachine: (old-k8s-version-386085) Creating domain...
	I1205 20:20:43.633496  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Skipping /home - not owner
	I1205 20:20:43.634612  581232 main.go:141] libmachine: (old-k8s-version-386085) define libvirt domain using xml: 
	I1205 20:20:43.634635  581232 main.go:141] libmachine: (old-k8s-version-386085) <domain type='kvm'>
	I1205 20:20:43.634642  581232 main.go:141] libmachine: (old-k8s-version-386085)   <name>old-k8s-version-386085</name>
	I1205 20:20:43.634647  581232 main.go:141] libmachine: (old-k8s-version-386085)   <memory unit='MiB'>2200</memory>
	I1205 20:20:43.634652  581232 main.go:141] libmachine: (old-k8s-version-386085)   <vcpu>2</vcpu>
	I1205 20:20:43.634656  581232 main.go:141] libmachine: (old-k8s-version-386085)   <features>
	I1205 20:20:43.634663  581232 main.go:141] libmachine: (old-k8s-version-386085)     <acpi/>
	I1205 20:20:43.634674  581232 main.go:141] libmachine: (old-k8s-version-386085)     <apic/>
	I1205 20:20:43.634680  581232 main.go:141] libmachine: (old-k8s-version-386085)     <pae/>
	I1205 20:20:43.634690  581232 main.go:141] libmachine: (old-k8s-version-386085)     
	I1205 20:20:43.634700  581232 main.go:141] libmachine: (old-k8s-version-386085)   </features>
	I1205 20:20:43.634710  581232 main.go:141] libmachine: (old-k8s-version-386085)   <cpu mode='host-passthrough'>
	I1205 20:20:43.634718  581232 main.go:141] libmachine: (old-k8s-version-386085)   
	I1205 20:20:43.634728  581232 main.go:141] libmachine: (old-k8s-version-386085)   </cpu>
	I1205 20:20:43.634743  581232 main.go:141] libmachine: (old-k8s-version-386085)   <os>
	I1205 20:20:43.634756  581232 main.go:141] libmachine: (old-k8s-version-386085)     <type>hvm</type>
	I1205 20:20:43.634772  581232 main.go:141] libmachine: (old-k8s-version-386085)     <boot dev='cdrom'/>
	I1205 20:20:43.634786  581232 main.go:141] libmachine: (old-k8s-version-386085)     <boot dev='hd'/>
	I1205 20:20:43.634798  581232 main.go:141] libmachine: (old-k8s-version-386085)     <bootmenu enable='no'/>
	I1205 20:20:43.634808  581232 main.go:141] libmachine: (old-k8s-version-386085)   </os>
	I1205 20:20:43.634817  581232 main.go:141] libmachine: (old-k8s-version-386085)   <devices>
	I1205 20:20:43.634827  581232 main.go:141] libmachine: (old-k8s-version-386085)     <disk type='file' device='cdrom'>
	I1205 20:20:43.634846  581232 main.go:141] libmachine: (old-k8s-version-386085)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/boot2docker.iso'/>
	I1205 20:20:43.634872  581232 main.go:141] libmachine: (old-k8s-version-386085)       <target dev='hdc' bus='scsi'/>
	I1205 20:20:43.634884  581232 main.go:141] libmachine: (old-k8s-version-386085)       <readonly/>
	I1205 20:20:43.634894  581232 main.go:141] libmachine: (old-k8s-version-386085)     </disk>
	I1205 20:20:43.634905  581232 main.go:141] libmachine: (old-k8s-version-386085)     <disk type='file' device='disk'>
	I1205 20:20:43.634921  581232 main.go:141] libmachine: (old-k8s-version-386085)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:20:43.634936  581232 main.go:141] libmachine: (old-k8s-version-386085)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/old-k8s-version-386085.rawdisk'/>
	I1205 20:20:43.634947  581232 main.go:141] libmachine: (old-k8s-version-386085)       <target dev='hda' bus='virtio'/>
	I1205 20:20:43.634965  581232 main.go:141] libmachine: (old-k8s-version-386085)     </disk>
	I1205 20:20:43.634976  581232 main.go:141] libmachine: (old-k8s-version-386085)     <interface type='network'>
	I1205 20:20:43.634984  581232 main.go:141] libmachine: (old-k8s-version-386085)       <source network='mk-old-k8s-version-386085'/>
	I1205 20:20:43.634993  581232 main.go:141] libmachine: (old-k8s-version-386085)       <model type='virtio'/>
	I1205 20:20:43.635003  581232 main.go:141] libmachine: (old-k8s-version-386085)     </interface>
	I1205 20:20:43.635012  581232 main.go:141] libmachine: (old-k8s-version-386085)     <interface type='network'>
	I1205 20:20:43.635024  581232 main.go:141] libmachine: (old-k8s-version-386085)       <source network='default'/>
	I1205 20:20:43.635060  581232 main.go:141] libmachine: (old-k8s-version-386085)       <model type='virtio'/>
	I1205 20:20:43.635085  581232 main.go:141] libmachine: (old-k8s-version-386085)     </interface>
	I1205 20:20:43.635094  581232 main.go:141] libmachine: (old-k8s-version-386085)     <serial type='pty'>
	I1205 20:20:43.635103  581232 main.go:141] libmachine: (old-k8s-version-386085)       <target port='0'/>
	I1205 20:20:43.635113  581232 main.go:141] libmachine: (old-k8s-version-386085)     </serial>
	I1205 20:20:43.635120  581232 main.go:141] libmachine: (old-k8s-version-386085)     <console type='pty'>
	I1205 20:20:43.635126  581232 main.go:141] libmachine: (old-k8s-version-386085)       <target type='serial' port='0'/>
	I1205 20:20:43.635133  581232 main.go:141] libmachine: (old-k8s-version-386085)     </console>
	I1205 20:20:43.635139  581232 main.go:141] libmachine: (old-k8s-version-386085)     <rng model='virtio'>
	I1205 20:20:43.635147  581232 main.go:141] libmachine: (old-k8s-version-386085)       <backend model='random'>/dev/random</backend>
	I1205 20:20:43.635183  581232 main.go:141] libmachine: (old-k8s-version-386085)     </rng>
	I1205 20:20:43.635208  581232 main.go:141] libmachine: (old-k8s-version-386085)     
	I1205 20:20:43.635220  581232 main.go:141] libmachine: (old-k8s-version-386085)     
	I1205 20:20:43.635229  581232 main.go:141] libmachine: (old-k8s-version-386085)   </devices>
	I1205 20:20:43.635240  581232 main.go:141] libmachine: (old-k8s-version-386085) </domain>
	I1205 20:20:43.635250  581232 main.go:141] libmachine: (old-k8s-version-386085) 
	I1205 20:20:43.639747  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:7d:61:fd in network default
	I1205 20:20:43.640495  581232 main.go:141] libmachine: (old-k8s-version-386085) Ensuring networks are active...
	I1205 20:20:43.640521  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:43.641406  581232 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network default is active
	I1205 20:20:43.641781  581232 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network mk-old-k8s-version-386085 is active
	I1205 20:20:43.642500  581232 main.go:141] libmachine: (old-k8s-version-386085) Getting domain xml...
	I1205 20:20:43.643597  581232 main.go:141] libmachine: (old-k8s-version-386085) Creating domain...
	I1205 20:20:44.983822  581232 main.go:141] libmachine: (old-k8s-version-386085) Waiting to get IP...
	I1205 20:20:44.985230  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:44.986902  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:44.986935  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:44.986864  581908 retry.go:31] will retry after 236.164365ms: waiting for machine to come up
	I1205 20:20:45.224442  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:45.225150  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:45.225180  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:45.225097  581908 retry.go:31] will retry after 299.031505ms: waiting for machine to come up
	I1205 20:20:45.525707  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:45.535180  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:45.535215  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:45.535142  581908 retry.go:31] will retry after 482.937753ms: waiting for machine to come up
	I1205 20:20:46.020072  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:46.021465  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:46.021508  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:46.021396  581908 retry.go:31] will retry after 591.679902ms: waiting for machine to come up
	I1205 20:20:46.615446  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:46.616003  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:46.616037  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:46.615953  581908 retry.go:31] will retry after 699.21032ms: waiting for machine to come up
	I1205 20:20:47.317096  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:47.317571  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:47.317626  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:47.317539  581908 retry.go:31] will retry after 925.517837ms: waiting for machine to come up
	I1205 20:20:48.245125  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:48.245736  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:48.245770  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:48.245691  581908 retry.go:31] will retry after 766.352404ms: waiting for machine to come up
	I1205 20:20:49.013673  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:49.014159  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:49.014191  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:49.014104  581908 retry.go:31] will retry after 1.324131642s: waiting for machine to come up
	I1205 20:20:50.340873  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:50.341340  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:50.341365  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:50.341298  581908 retry.go:31] will retry after 1.710981043s: waiting for machine to come up
	I1205 20:20:52.053673  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:52.054208  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:52.054230  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:52.054134  581908 retry.go:31] will retry after 1.773145165s: waiting for machine to come up
	I1205 20:20:53.828938  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:53.829421  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:53.829460  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:53.829367  581908 retry.go:31] will retry after 2.633576834s: waiting for machine to come up
	I1205 20:20:56.465878  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:56.466497  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:56.466532  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:56.466440  581908 retry.go:31] will retry after 3.247630624s: waiting for machine to come up
	I1205 20:20:59.715775  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:20:59.716340  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:20:59.716369  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:20:59.716305  581908 retry.go:31] will retry after 3.389498215s: waiting for machine to come up
	I1205 20:21:03.107995  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:03.108504  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:21:03.108556  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:21:03.108453  581908 retry.go:31] will retry after 4.383898803s: waiting for machine to come up
	I1205 20:21:07.494584  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.495105  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has current primary IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.495134  581232 main.go:141] libmachine: (old-k8s-version-386085) Found IP for machine: 192.168.72.144
	I1205 20:21:07.495156  581232 main.go:141] libmachine: (old-k8s-version-386085) Reserving static IP address...
	I1205 20:21:07.495523  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"} in network mk-old-k8s-version-386085
	I1205 20:21:07.574494  581232 main.go:141] libmachine: (old-k8s-version-386085) Reserved static IP address: 192.168.72.144
	I1205 20:21:07.574530  581232 main.go:141] libmachine: (old-k8s-version-386085) Waiting for SSH to be available...
	I1205 20:21:07.574539  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Getting to WaitForSSH function...
	I1205 20:21:07.577431  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.577829  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:07.577864  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.577948  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH client type: external
	I1205 20:21:07.577972  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa (-rw-------)
	I1205 20:21:07.578011  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:21:07.578024  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | About to run SSH command:
	I1205 20:21:07.578035  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | exit 0
	I1205 20:21:07.704757  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | SSH cmd err, output: <nil>: 
	I1205 20:21:07.705116  581232 main.go:141] libmachine: (old-k8s-version-386085) KVM machine creation complete!
	I1205 20:21:07.705414  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:21:07.706017  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:07.706260  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:07.706466  581232 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:21:07.706487  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetState
	I1205 20:21:07.707864  581232 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:21:07.707883  581232 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:21:07.707891  581232 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:21:07.707899  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:07.710311  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.710665  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:07.710694  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.710808  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:07.710999  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.711171  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.711297  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:07.711467  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:07.711677  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:07.711693  581232 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:21:07.823658  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:21:07.823683  581232 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:21:07.823704  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:07.826564  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.826917  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:07.826951  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.827059  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:07.827293  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.827462  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.827626  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:07.827767  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:07.827949  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:07.827960  581232 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:21:07.937280  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:21:07.937409  581232 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:21:07.937422  581232 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:21:07.937431  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:21:07.937697  581232 buildroot.go:166] provisioning hostname "old-k8s-version-386085"
	I1205 20:21:07.937709  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:21:07.937878  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:07.940420  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.940734  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:07.940777  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:07.940883  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:07.941098  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.941242  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:07.941373  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:07.941512  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:07.941695  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:07.941707  581232 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386085 && echo "old-k8s-version-386085" | sudo tee /etc/hostname
	I1205 20:21:08.070277  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386085
	
	I1205 20:21:08.070339  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:08.073960  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.074511  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:08.074547  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.074763  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:08.075027  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:08.075266  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:08.075447  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:08.075697  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:08.075900  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:08.075918  581232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386085' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386085/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386085' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:21:08.195853  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:21:08.195898  581232 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:21:08.196009  581232 buildroot.go:174] setting up certificates
	I1205 20:21:08.196032  581232 provision.go:84] configureAuth start
	I1205 20:21:08.196054  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:21:08.196382  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:21:08.199491  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.199778  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:08.199801  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.199958  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:08.202612  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.203010  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:08.203035  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.203257  581232 provision.go:143] copyHostCerts
	I1205 20:21:08.203337  581232 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:21:08.203362  581232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:21:08.203424  581232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:21:08.203539  581232 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:21:08.203550  581232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:21:08.203571  581232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:21:08.203637  581232 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:21:08.203645  581232 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:21:08.203663  581232 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:21:08.203723  581232 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386085 san=[127.0.0.1 192.168.72.144 localhost minikube old-k8s-version-386085]
	I1205 20:21:08.616043  581232 provision.go:177] copyRemoteCerts
	I1205 20:21:08.616123  581232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:21:08.616154  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:08.619707  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.620149  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:08.620177  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.620431  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:08.620682  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:08.620858  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:08.621024  581232 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:21:08.707529  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:21:08.735540  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:21:08.762216  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:21:08.787370  581232 provision.go:87] duration metric: took 591.317131ms to configureAuth
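The provision step above generated a server certificate with SANs [127.0.0.1 192.168.72.144 localhost minikube old-k8s-version-386085]. A minimal sketch of how to confirm those SANs by hand on the CI host (not captured in this run; the server.pem path is taken from the log above):

    # illustrative only: inspect the SANs of the generated server certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'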
	I1205 20:21:08.787406  581232 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:21:08.787580  581232 config.go:182] Loaded profile config "old-k8s-version-386085": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:21:08.787674  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:08.790700  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.790984  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:08.791019  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:08.791168  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:08.791402  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:08.791575  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:08.791727  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:08.791918  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:08.792153  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:08.792174  581232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:21:09.033410  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:21:09.033456  581232 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:21:09.033470  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetURL
	I1205 20:21:09.034850  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using libvirt version 6000000
	I1205 20:21:09.037053  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.037381  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.037419  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.037664  581232 main.go:141] libmachine: Docker is up and running!
	I1205 20:21:09.037680  581232 main.go:141] libmachine: Reticulating splines...
	I1205 20:21:09.037688  581232 client.go:171] duration metric: took 25.94989327s to LocalClient.Create
	I1205 20:21:09.037710  581232 start.go:167] duration metric: took 25.94996229s to libmachine.API.Create "old-k8s-version-386085"
	I1205 20:21:09.037720  581232 start.go:293] postStartSetup for "old-k8s-version-386085" (driver="kvm2")
	I1205 20:21:09.037731  581232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:21:09.037751  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:09.038012  581232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:21:09.038040  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:09.040077  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.040435  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.040461  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.040671  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:09.040851  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:09.041004  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:09.041165  581232 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:21:09.123406  581232 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:21:09.134277  581232 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:21:09.134307  581232 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:21:09.134382  581232 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:21:09.134501  581232 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:21:09.134685  581232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:21:09.145011  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:21:09.169202  581232 start.go:296] duration metric: took 131.464611ms for postStartSetup
	I1205 20:21:09.169267  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:21:09.169881  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:21:09.172535  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.172799  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.172824  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.173121  581232 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json ...
	I1205 20:21:09.173329  581232 start.go:128] duration metric: took 26.107477694s to createHost
	I1205 20:21:09.173353  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:09.175715  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.176049  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.176083  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.176317  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:09.176509  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:09.176653  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:09.176792  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:09.176924  581232 main.go:141] libmachine: Using SSH client type: native
	I1205 20:21:09.177093  581232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:21:09.177103  581232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:21:09.285520  581232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430069.248530299
	
	I1205 20:21:09.285546  581232 fix.go:216] guest clock: 1733430069.248530299
	I1205 20:21:09.285555  581232 fix.go:229] Guest: 2024-12-05 20:21:09.248530299 +0000 UTC Remote: 2024-12-05 20:21:09.173342326 +0000 UTC m=+85.302458541 (delta=75.187973ms)
	I1205 20:21:09.285582  581232 fix.go:200] guest clock delta is within tolerance: 75.187973ms
	I1205 20:21:09.285589  581232 start.go:83] releasing machines lock for "old-k8s-version-386085", held for 26.21993585s
	I1205 20:21:09.285621  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:09.285923  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:21:09.289016  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.289485  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.289521  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.289797  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:09.290382  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:09.290592  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:21:09.290681  581232 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:21:09.290742  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:09.290850  581232 ssh_runner.go:195] Run: cat /version.json
	I1205 20:21:09.290879  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:21:09.293689  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.293863  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.294121  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.294150  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.294342  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:09.294348  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:09.294374  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:09.294533  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:09.294545  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:21:09.294714  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:09.294711  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:21:09.294905  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:21:09.294910  581232 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:21:09.295088  581232 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:21:09.374022  581232 ssh_runner.go:195] Run: systemctl --version
	I1205 20:21:09.404694  581232 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:21:09.567611  581232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:21:09.574634  581232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:21:09.574721  581232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:21:09.593608  581232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:21:09.593638  581232 start.go:495] detecting cgroup driver to use...
	I1205 20:21:09.593729  581232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:21:09.611604  581232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:21:09.628223  581232 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:21:09.628332  581232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:21:09.643477  581232 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:21:09.659106  581232 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:21:09.787468  581232 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:21:09.941819  581232 docker.go:233] disabling docker service ...
	I1205 20:21:09.941895  581232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:21:09.959710  581232 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:21:09.974459  581232 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:21:10.133136  581232 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:21:10.292148  581232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:21:10.307485  581232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:21:10.329479  581232 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:21:10.329555  581232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:10.341416  581232 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:21:10.341504  581232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:10.353522  581232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:10.365470  581232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:21:10.377636  581232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:21:10.390062  581232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:21:10.400788  581232 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:21:10.400863  581232 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:21:10.415612  581232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:21:10.426280  581232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:21:10.558914  581232 ssh_runner.go:195] Run: sudo systemctl restart crio
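The sed edits above rewrite the CRI-O drop-in before this restart. A minimal sketch (assumed, not captured in this run) of how to confirm the result on the guest:

    # expected values after the edits:
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # crictl endpoint written earlier; should read: runtime-endpoint: unix:///var/run/crio/crio.sock
    cat /etc/crictl.yaml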
	I1205 20:21:10.659230  581232 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:21:10.659325  581232 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:21:10.664562  581232 start.go:563] Will wait 60s for crictl version
	I1205 20:21:10.664642  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:10.668647  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:21:10.709856  581232 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:21:10.709961  581232 ssh_runner.go:195] Run: crio --version
	I1205 20:21:10.740609  581232 ssh_runner.go:195] Run: crio --version
	I1205 20:21:10.771324  581232 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 20:21:10.772795  581232 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:21:10.775587  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:10.776018  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:20:59 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:21:10.776050  581232 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:21:10.776296  581232 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:21:10.780713  581232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:21:10.794404  581232 kubeadm.go:883] updating cluster {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:21:10.794523  581232 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:21:10.794581  581232 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:21:10.826595  581232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:21:10.826669  581232 ssh_runner.go:195] Run: which lz4
	I1205 20:21:10.830855  581232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:21:10.835196  581232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:21:10.835253  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 20:21:12.603598  581232 crio.go:462] duration metric: took 1.772775692s to copy over tarball
	I1205 20:21:12.603701  581232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:21:15.190371  581232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.586625375s)
	I1205 20:21:15.190404  581232 crio.go:469] duration metric: took 2.586761927s to extract the tarball
	I1205 20:21:15.190415  581232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:21:15.233842  581232 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:21:15.285647  581232 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:21:15.285696  581232 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:21:15.285798  581232 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:21:15.285817  581232 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.285850  581232 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.285877  581232 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.285854  581232 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:21:15.285943  581232 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.285946  581232 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.285817  581232 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:15.287809  581232 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:21:15.287858  581232 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.287897  581232 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:15.287981  581232 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.288060  581232 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.288128  581232 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.288054  581232 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:21:15.288333  581232 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.481573  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 20:21:15.481906  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.489619  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.502754  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.503222  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.533915  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:15.544213  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.613718  581232 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 20:21:15.613766  581232 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.613822  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.613821  581232 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 20:21:15.613860  581232 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 20:21:15.613908  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.677490  581232 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 20:21:15.677546  581232 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.677590  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.677603  581232 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 20:21:15.677645  581232 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.677655  581232 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 20:21:15.677686  581232 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.677690  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.677726  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.696757  581232 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 20:21:15.696816  581232 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.696828  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.696840  581232 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 20:21:15.696842  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:21:15.696856  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.696862  581232 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:15.696892  581232 ssh_runner.go:195] Run: which crictl
	I1205 20:21:15.696899  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.696908  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.696903  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.805128  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.805194  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:21:15.805194  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:15.828044  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:15.828069  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:15.828097  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:21:15.828122  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:15.953536  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:15.953615  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:21:15.953677  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:21:16.003327  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:21:16.003459  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:21:16.003498  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:21:16.003543  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:16.095042  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 20:21:16.128437  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 20:21:16.128597  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:21:16.168072  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 20:21:16.168119  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 20:21:16.168286  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 20:21:16.168312  581232 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:21:16.199436  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 20:21:16.224905  581232 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 20:21:16.459721  581232 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:21:16.602021  581232 cache_images.go:92] duration metric: took 1.316298413s to LoadCachedImages
	W1205 20:21:16.602121  581232 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1205 20:21:16.602138  581232 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.20.0 crio true true} ...
	I1205 20:21:16.602276  581232 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386085 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:21:16.602366  581232 ssh_runner.go:195] Run: crio config
	I1205 20:21:16.664835  581232 cni.go:84] Creating CNI manager for ""
	I1205 20:21:16.664862  581232 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:21:16.664872  581232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:21:16.664896  581232 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386085 NodeName:old-k8s-version-386085 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:21:16.665072  581232 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386085"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:21:16.665144  581232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 20:21:16.676017  581232 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:21:16.676085  581232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:21:16.686073  581232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:21:16.705467  581232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:21:16.723780  581232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
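The kubeadm config printed above lands on the guest as /var/tmp/minikube/kubeadm.yaml.new. As an illustrative cross-check (a sketch, not run as part of this test), the v1.20.0 kubeadm binary found above can list the control-plane images that config implies:

    # run on the guest; paths are taken from the log above
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images list \
      --config /var/tmp/minikube/kubeadm.yaml.new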
	I1205 20:21:16.742929  581232 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1205 20:21:16.747590  581232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:21:16.761580  581232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:21:16.887750  581232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:21:16.907151  581232 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085 for IP: 192.168.72.144
	I1205 20:21:16.907191  581232 certs.go:194] generating shared ca certs ...
	I1205 20:21:16.907216  581232 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:16.907435  581232 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:21:16.907500  581232 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:21:16.907514  581232 certs.go:256] generating profile certs ...
	I1205 20:21:16.907581  581232 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key
	I1205 20:21:16.907595  581232 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt with IP's: []
	I1205 20:21:17.059572  581232 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt ...
	I1205 20:21:17.059609  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: {Name:mkb3552afa22200472d8cbab774aa7d1cfbbc38e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.059809  581232 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key ...
	I1205 20:21:17.059831  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key: {Name:mk4402cc2a008bc8b6e2d9e5c89265948fc7d161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.059959  581232 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18
	I1205 20:21:17.059988  581232 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt.87b35b18 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.144]
	I1205 20:21:17.283666  581232 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt.87b35b18 ...
	I1205 20:21:17.283710  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt.87b35b18: {Name:mk95e559c2ab4bdbf5838fd82bcdb5690297f040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.307051  581232 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18 ...
	I1205 20:21:17.307129  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18: {Name:mkcf0c9dfca85d1b074169b5536a3904e8d01895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.307304  581232 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt.87b35b18 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt
	I1205 20:21:17.307399  581232 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key
	I1205 20:21:17.307467  581232 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key
	I1205 20:21:17.307487  581232 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt with IP's: []
	I1205 20:21:17.754389  581232 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt ...
	I1205 20:21:17.754426  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt: {Name:mkfd0b51fca4395714f1ab65bfd9bca9985b097a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.754631  581232 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key ...
	I1205 20:21:17.754654  581232 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key: {Name:mkbc9f8e3659a6cd377fafc79f8082dfc2d3efd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:21:17.754890  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:21:17.754948  581232 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:21:17.754964  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:21:17.754996  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:21:17.755029  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:21:17.755063  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:21:17.755133  581232 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:21:17.755764  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:21:17.791982  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:21:17.821213  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:21:17.860922  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:21:17.889847  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 20:21:17.916460  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:21:17.943618  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:21:17.972693  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:21:18.017986  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:21:18.045526  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:21:18.075197  581232 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:21:18.102963  581232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:21:18.121793  581232 ssh_runner.go:195] Run: openssl version
	I1205 20:21:18.128680  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:21:18.141429  581232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:21:18.147547  581232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:21:18.147631  581232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:21:18.154645  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:21:18.167345  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:21:18.179712  581232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:21:18.184996  581232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:21:18.185070  581232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:21:18.191832  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:21:18.204037  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:21:18.216442  581232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:21:18.221666  581232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:21:18.221745  581232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:21:18.228363  581232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
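For reference, the hash-and-link sequence above follows the standard OpenSSL trust-store convention: each CA certificate is hashed with openssl x509 -hash and exposed under /etc/ssl/certs/<hash>.0 so the system verifier can find it. A minimal sketch of the same pattern run by hand (the minikubeCA.pem path is taken from the log; this is illustrative, not part of the captured run):

	# compute the subject-hash name OpenSSL looks for during verification
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expose the certificate under that hash in the system trust directory
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0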
	I1205 20:21:18.240097  581232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:21:18.245323  581232 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:21:18.245395  581232 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:21:18.245512  581232 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:21:18.245580  581232 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:21:18.293809  581232 cri.go:89] found id: ""
	I1205 20:21:18.293898  581232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:21:18.306100  581232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:21:18.317203  581232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:21:18.328481  581232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:21:18.328507  581232 kubeadm.go:157] found existing configuration files:
	
	I1205 20:21:18.328576  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:21:18.339187  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:21:18.339281  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:21:18.349982  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:21:18.360102  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:21:18.360185  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:21:18.370950  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:21:18.380781  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:21:18.380860  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:21:18.391326  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:21:18.401434  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:21:18.401506  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:21:18.412281  581232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:21:18.538345  581232 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:21:18.538522  581232 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:21:18.711502  581232 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:21:18.711671  581232 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:21:18.711826  581232 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:21:18.939597  581232 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:21:18.941541  581232 out.go:235]   - Generating certificates and keys ...
	I1205 20:21:18.941649  581232 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:21:18.941750  581232 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:21:19.209460  581232 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:21:19.828183  581232 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:21:20.001872  581232 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:21:20.184883  581232 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:21:20.484359  581232 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:21:20.484615  581232 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-386085] and IPs [192.168.72.144 127.0.0.1 ::1]
	I1205 20:21:20.596214  581232 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:21:20.596437  581232 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-386085] and IPs [192.168.72.144 127.0.0.1 ::1]
	I1205 20:21:20.863231  581232 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:21:21.167752  581232 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:21:21.260475  581232 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:21:21.260585  581232 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:21:21.617603  581232 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:21:21.682487  581232 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:21:21.851457  581232 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:21:22.102212  581232 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:21:22.121499  581232 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:21:22.123402  581232 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:21:22.123474  581232 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:21:22.332337  581232 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:21:22.334071  581232 out.go:235]   - Booting up control plane ...
	I1205 20:21:22.334225  581232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:21:22.351953  581232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:21:22.353215  581232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:21:22.354180  581232 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:21:22.360481  581232 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:22:02.346428  581232 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:22:02.347956  581232 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:22:02.348255  581232 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:22:07.347776  581232 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:22:07.348016  581232 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:22:17.347537  581232 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:22:17.347845  581232 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:22:37.347497  581232 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:22:37.347737  581232 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:23:17.348854  581232 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:23:17.349401  581232 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:23:17.349435  581232 kubeadm.go:310] 
	I1205 20:23:17.349530  581232 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:23:17.349622  581232 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:23:17.349643  581232 kubeadm.go:310] 
	I1205 20:23:17.349714  581232 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:23:17.349802  581232 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:23:17.350041  581232 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:23:17.350069  581232 kubeadm.go:310] 
	I1205 20:23:17.350313  581232 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:23:17.350384  581232 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:23:17.350481  581232 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:23:17.350502  581232 kubeadm.go:310] 
	I1205 20:23:17.350743  581232 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:23:17.350918  581232 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:23:17.350928  581232 kubeadm.go:310] 
	I1205 20:23:17.351181  581232 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:23:17.351428  581232 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:23:17.351608  581232 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:23:17.351751  581232 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:23:17.351815  581232 kubeadm.go:310] 
	I1205 20:23:17.352288  581232 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:23:17.352461  581232 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:23:17.352636  581232 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
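The kubeadm error above also lists the checks to run when the kubelet never comes up. Consolidated as shell commands on the node (CONTAINERID is a placeholder; these mirror the suggestions printed in the log rather than anything the test executed):

	# kubelet service state and recent logs
	systemctl status kubelet
	journalctl -xeu kubelet
	# control-plane containers under CRI-O, if any were created
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID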
	W1205 20:23:17.353053  581232 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-386085] and IPs [192.168.72.144 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-386085] and IPs [192.168.72.144 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 20:23:17.353131  581232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:23:19.019555  581232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.666390057s)
	I1205 20:23:19.019634  581232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:23:19.035538  581232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:23:19.046725  581232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:23:19.046750  581232 kubeadm.go:157] found existing configuration files:
	
	I1205 20:23:19.046814  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:23:19.057033  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:23:19.057128  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:23:19.067773  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:23:19.077770  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:23:19.077831  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:23:19.090277  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:23:19.101504  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:23:19.101628  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:23:19.117096  581232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:23:19.131134  581232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:23:19.131206  581232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:23:19.144705  581232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:23:19.224193  581232 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:23:19.224288  581232 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:23:19.395621  581232 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:23:19.395802  581232 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:23:19.395949  581232 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:23:19.596229  581232 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:23:19.598224  581232 out.go:235]   - Generating certificates and keys ...
	I1205 20:23:19.598352  581232 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:23:19.598425  581232 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:23:19.598518  581232 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:23:19.598566  581232 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:23:19.598773  581232 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:23:19.598877  581232 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:23:19.599301  581232 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:23:19.599717  581232 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:23:19.600094  581232 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:23:19.602826  581232 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:23:19.602895  581232 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:23:19.603023  581232 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:23:19.827976  581232 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:23:19.994821  581232 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:23:20.150754  581232 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:23:20.255239  581232 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:23:20.270715  581232 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:23:20.271942  581232 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:23:20.272006  581232 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:23:20.403715  581232 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:23:20.405645  581232 out.go:235]   - Booting up control plane ...
	I1205 20:23:20.405776  581232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:23:20.409976  581232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:23:20.411170  581232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:23:20.412004  581232 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:23:20.422081  581232 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:24:00.421650  581232 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:24:00.422035  581232 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:24:00.422230  581232 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:24:05.422552  581232 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:24:05.422791  581232 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:24:15.422981  581232 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:24:15.423243  581232 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:24:35.423797  581232 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:24:35.424096  581232 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:25:15.425352  581232 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:25:15.425642  581232 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:25:15.425675  581232 kubeadm.go:310] 
	I1205 20:25:15.425738  581232 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:25:15.425781  581232 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:25:15.425788  581232 kubeadm.go:310] 
	I1205 20:25:15.425828  581232 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:25:15.425878  581232 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:25:15.426060  581232 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:25:15.426091  581232 kubeadm.go:310] 
	I1205 20:25:15.426234  581232 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:25:15.426283  581232 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:25:15.426338  581232 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:25:15.426348  581232 kubeadm.go:310] 
	I1205 20:25:15.426476  581232 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:25:15.426595  581232 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:25:15.426608  581232 kubeadm.go:310] 
	I1205 20:25:15.426747  581232 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:25:15.426867  581232 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:25:15.426974  581232 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:25:15.427042  581232 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:25:15.427054  581232 kubeadm.go:310] 
	I1205 20:25:15.428366  581232 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:25:15.428484  581232 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:25:15.428603  581232 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 20:25:15.428667  581232 kubeadm.go:394] duration metric: took 3m57.183276784s to StartCluster
	I1205 20:25:15.428748  581232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:25:15.428819  581232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:25:15.493630  581232 cri.go:89] found id: ""
	I1205 20:25:15.493688  581232 logs.go:282] 0 containers: []
	W1205 20:25:15.493702  581232 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:25:15.493711  581232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:25:15.493777  581232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:25:15.531396  581232 cri.go:89] found id: ""
	I1205 20:25:15.531436  581232 logs.go:282] 0 containers: []
	W1205 20:25:15.531450  581232 logs.go:284] No container was found matching "etcd"
	I1205 20:25:15.531457  581232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:25:15.531534  581232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:25:15.573060  581232 cri.go:89] found id: ""
	I1205 20:25:15.573092  581232 logs.go:282] 0 containers: []
	W1205 20:25:15.573101  581232 logs.go:284] No container was found matching "coredns"
	I1205 20:25:15.573112  581232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:25:15.573172  581232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:25:15.616170  581232 cri.go:89] found id: ""
	I1205 20:25:15.616200  581232 logs.go:282] 0 containers: []
	W1205 20:25:15.616209  581232 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:25:15.616216  581232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:25:15.616311  581232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:25:15.653235  581232 cri.go:89] found id: ""
	I1205 20:25:15.653277  581232 logs.go:282] 0 containers: []
	W1205 20:25:15.653287  581232 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:25:15.653293  581232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:25:15.653347  581232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:25:15.689614  581232 cri.go:89] found id: ""
	I1205 20:25:15.689644  581232 logs.go:282] 0 containers: []
	W1205 20:25:15.689656  581232 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:25:15.689664  581232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:25:15.689732  581232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:25:15.730024  581232 cri.go:89] found id: ""
	I1205 20:25:15.730055  581232 logs.go:282] 0 containers: []
	W1205 20:25:15.730064  581232 logs.go:284] No container was found matching "kindnet"
	I1205 20:25:15.730075  581232 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:25:15.730088  581232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:25:15.831206  581232 logs.go:123] Gathering logs for container status ...
	I1205 20:25:15.831251  581232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:25:15.870955  581232 logs.go:123] Gathering logs for kubelet ...
	I1205 20:25:15.870994  581232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:25:15.924039  581232 logs.go:123] Gathering logs for dmesg ...
	I1205 20:25:15.924086  581232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:25:15.938761  581232 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:25:15.938795  581232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:25:16.070488  581232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
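The block above shows minikube's fallback diagnostics collection after no control-plane containers were found: the CRI-O and kubelet journals, dmesg, container status, and kubectl describe nodes (which fails here because the API server on localhost:8443 never came up). The same bundle can be gathered manually with the commands shown in the log, for example:

	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig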
	W1205 20:25:16.070515  581232 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 20:25:16.070563  581232 out.go:270] * 
	W1205 20:25:16.070629  581232 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:25:16.070648  581232 out.go:270] * 
	* 
	W1205 20:25:16.071558  581232 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:25:16.075039  581232 out.go:201] 
	W1205 20:25:16.076470  581232 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:25:16.076530  581232 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 20:25:16.076559  581232 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 20:25:16.078058  581232 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-386085 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 6 (240.568921ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:25:16.378922  584540 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-386085" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-386085" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (332.53s)
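The exit path above is K8S_KUBELET_NOT_RUNNING, and the captured output already names the next steps: inspect the kubelet journal on the guest and retry with an explicit cgroup driver. A minimal sketch of those steps, reusing the profile name and flags from the failed start command and the --extra-config value from the suggestion line; invoking the quoted checks from the host through `minikube ssh` is an assumption, not something the test does:

	# assumption: run the checks quoted in the kubeadm output from the host via `minikube ssh`
	out/minikube-linux-amd64 ssh -p old-k8s-version-386085 "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p old-k8s-version-386085 "sudo journalctl -xeu kubelet"

	# retry the start with the kubelet cgroup driver suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-386085 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd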

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-816185 --alsologtostderr -v=3
E1205 20:23:15.012686  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-816185 --alsologtostderr -v=3: exit status 82 (2m0.529002054s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-816185"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:23:14.489581  583528 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:23:14.489697  583528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:23:14.489705  583528 out.go:358] Setting ErrFile to fd 2...
	I1205 20:23:14.489709  583528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:23:14.489875  583528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:23:14.490111  583528 out.go:352] Setting JSON to false
	I1205 20:23:14.490191  583528 mustload.go:65] Loading cluster: no-preload-816185
	I1205 20:23:14.490552  583528 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:23:14.490624  583528 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/config.json ...
	I1205 20:23:14.490789  583528 mustload.go:65] Loading cluster: no-preload-816185
	I1205 20:23:14.490889  583528 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:23:14.490915  583528 stop.go:39] StopHost: no-preload-816185
	I1205 20:23:14.491275  583528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:23:14.491327  583528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:23:14.506780  583528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I1205 20:23:14.507352  583528 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:23:14.508054  583528 main.go:141] libmachine: Using API Version  1
	I1205 20:23:14.508092  583528 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:23:14.508509  583528 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:23:14.510994  583528 out.go:177] * Stopping node "no-preload-816185"  ...
	I1205 20:23:14.512321  583528 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 20:23:14.512356  583528 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:23:14.512578  583528 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 20:23:14.512604  583528 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:23:14.515891  583528 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:23:14.516376  583528 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:21:35 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:23:14.516418  583528 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:23:14.516558  583528 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:23:14.516751  583528 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:23:14.516915  583528 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:23:14.517097  583528 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:23:14.619611  583528 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 20:23:14.677881  583528 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 20:23:14.738330  583528 main.go:141] libmachine: Stopping "no-preload-816185"...
	I1205 20:23:14.738393  583528 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:23:14.740048  583528 main.go:141] libmachine: (no-preload-816185) Calling .Stop
	I1205 20:23:14.744394  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 0/120
	I1205 20:23:15.745969  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 1/120
	I1205 20:23:16.747179  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 2/120
	I1205 20:23:17.748709  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 3/120
	I1205 20:23:18.750883  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 4/120
	I1205 20:23:19.752427  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 5/120
	I1205 20:23:20.754230  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 6/120
	I1205 20:23:21.755681  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 7/120
	I1205 20:23:22.757131  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 8/120
	I1205 20:23:23.758935  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 9/120
	I1205 20:23:24.760361  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 10/120
	I1205 20:23:25.761805  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 11/120
	I1205 20:23:26.763392  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 12/120
	I1205 20:23:27.765253  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 13/120
	I1205 20:23:28.766808  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 14/120
	I1205 20:23:29.769151  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 15/120
	I1205 20:23:30.770757  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 16/120
	I1205 20:23:31.772366  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 17/120
	I1205 20:23:32.774167  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 18/120
	I1205 20:23:33.775735  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 19/120
	I1205 20:23:34.778184  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 20/120
	I1205 20:23:35.779822  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 21/120
	I1205 20:23:36.781311  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 22/120
	I1205 20:23:37.783702  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 23/120
	I1205 20:23:38.785049  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 24/120
	I1205 20:23:39.787390  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 25/120
	I1205 20:23:40.789512  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 26/120
	I1205 20:23:41.791119  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 27/120
	I1205 20:23:42.792759  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 28/120
	I1205 20:23:43.795057  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 29/120
	I1205 20:23:44.797620  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 30/120
	I1205 20:23:45.799299  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 31/120
	I1205 20:23:46.800942  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 32/120
	I1205 20:23:47.802491  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 33/120
	I1205 20:23:48.804383  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 34/120
	I1205 20:23:49.806586  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 35/120
	I1205 20:23:50.808252  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 36/120
	I1205 20:23:51.809894  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 37/120
	I1205 20:23:52.811274  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 38/120
	I1205 20:23:53.814339  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 39/120
	I1205 20:23:54.816672  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 40/120
	I1205 20:23:55.819056  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 41/120
	I1205 20:23:56.820641  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 42/120
	I1205 20:23:57.822912  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 43/120
	I1205 20:23:58.824360  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 44/120
	I1205 20:23:59.826811  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 45/120
	I1205 20:24:00.828379  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 46/120
	I1205 20:24:01.830061  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 47/120
	I1205 20:24:02.831699  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 48/120
	I1205 20:24:03.833388  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 49/120
	I1205 20:24:04.834939  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 50/120
	I1205 20:24:05.836487  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 51/120
	I1205 20:24:06.838004  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 52/120
	I1205 20:24:07.839447  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 53/120
	I1205 20:24:08.840855  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 54/120
	I1205 20:24:09.843085  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 55/120
	I1205 20:24:10.844661  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 56/120
	I1205 20:24:11.846723  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 57/120
	I1205 20:24:12.848254  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 58/120
	I1205 20:24:13.849549  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 59/120
	I1205 20:24:14.852109  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 60/120
	I1205 20:24:15.853483  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 61/120
	I1205 20:24:16.855362  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 62/120
	I1205 20:24:17.857302  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 63/120
	I1205 20:24:18.859138  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 64/120
	I1205 20:24:19.861531  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 65/120
	I1205 20:24:20.863085  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 66/120
	I1205 20:24:21.864767  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 67/120
	I1205 20:24:22.867082  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 68/120
	I1205 20:24:23.868424  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 69/120
	I1205 20:24:24.869818  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 70/120
	I1205 20:24:25.871166  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 71/120
	I1205 20:24:26.872888  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 72/120
	I1205 20:24:27.874790  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 73/120
	I1205 20:24:28.876603  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 74/120
	I1205 20:24:29.878834  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 75/120
	I1205 20:24:30.880116  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 76/120
	I1205 20:24:31.881684  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 77/120
	I1205 20:24:32.882874  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 78/120
	I1205 20:24:33.884615  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 79/120
	I1205 20:24:34.887130  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 80/120
	I1205 20:24:35.888861  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 81/120
	I1205 20:24:36.890800  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 82/120
	I1205 20:24:37.892828  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 83/120
	I1205 20:24:38.894683  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 84/120
	I1205 20:24:39.897087  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 85/120
	I1205 20:24:40.898869  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 86/120
	I1205 20:24:41.900262  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 87/120
	I1205 20:24:42.901848  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 88/120
	I1205 20:24:43.903161  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 89/120
	I1205 20:24:44.905461  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 90/120
	I1205 20:24:45.908059  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 91/120
	I1205 20:24:46.910518  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 92/120
	I1205 20:24:47.912076  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 93/120
	I1205 20:24:48.913606  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 94/120
	I1205 20:24:49.915880  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 95/120
	I1205 20:24:50.917326  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 96/120
	I1205 20:24:51.918640  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 97/120
	I1205 20:24:52.919984  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 98/120
	I1205 20:24:53.921514  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 99/120
	I1205 20:24:54.923923  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 100/120
	I1205 20:24:55.925281  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 101/120
	I1205 20:24:56.926738  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 102/120
	I1205 20:24:57.928109  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 103/120
	I1205 20:24:58.929768  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 104/120
	I1205 20:24:59.931879  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 105/120
	I1205 20:25:00.933403  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 106/120
	I1205 20:25:01.934780  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 107/120
	I1205 20:25:02.936313  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 108/120
	I1205 20:25:03.937952  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 109/120
	I1205 20:25:04.940123  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 110/120
	I1205 20:25:05.941582  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 111/120
	I1205 20:25:06.943089  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 112/120
	I1205 20:25:07.944660  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 113/120
	I1205 20:25:08.946280  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 114/120
	I1205 20:25:09.948614  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 115/120
	I1205 20:25:10.949913  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 116/120
	I1205 20:25:11.951261  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 117/120
	I1205 20:25:12.953108  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 118/120
	I1205 20:25:13.954834  583528 main.go:141] libmachine: (no-preload-816185) Waiting for machine to stop 119/120
	I1205 20:25:14.956010  583528 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 20:25:14.956104  583528 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:25:14.957939  583528 out.go:201] 
	W1205 20:25:14.959665  583528 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 20:25:14.959686  583528 out.go:270] * 
	* 
	W1205 20:25:14.963332  583528 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:25:14.965032  583528 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-816185 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-816185 -n no-preload-816185
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-816185 -n no-preload-816185: exit status 3 (18.646024258s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:25:33.612672  584507 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.37:22: connect: no route to host
	E1205 20:25:33.612696  584507 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.37:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-816185" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.18s)
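The stop here times out after 120 roughly one-second polls ("Waiting for machine to stop 119/120") and exits with GUEST_STOP_TIMEOUT while the VM still reports "Running". The error box asks for two artifacts; a sketch of collecting them for this profile, assuming the /tmp log path printed above is still present on the host, with a final delete as an assumed recovery step for a stuck CI profile rather than part of the test flow:

	# gather the logs requested in the error box
	out/minikube-linux-amd64 logs -p no-preload-816185 --file=logs.txt
	cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log .

	# assumption: if the VM stays stuck in "Running", removing the profile is one way to recover
	out/minikube-linux-amd64 delete -p no-preload-816185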

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-789000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-789000 --alsologtostderr -v=3: exit status 82 (2m0.573330905s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-789000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:23:19.256538  583612 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:23:19.256734  583612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:23:19.256748  583612 out.go:358] Setting ErrFile to fd 2...
	I1205 20:23:19.256755  583612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:23:19.257108  583612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:23:19.257486  583612 out.go:352] Setting JSON to false
	I1205 20:23:19.257609  583612 mustload.go:65] Loading cluster: embed-certs-789000
	I1205 20:23:19.258146  583612 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:23:19.258306  583612 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/config.json ...
	I1205 20:23:19.258567  583612 mustload.go:65] Loading cluster: embed-certs-789000
	I1205 20:23:19.258730  583612 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:23:19.258790  583612 stop.go:39] StopHost: embed-certs-789000
	I1205 20:23:19.259411  583612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:23:19.259476  583612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:23:19.274891  583612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I1205 20:23:19.275394  583612 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:23:19.275999  583612 main.go:141] libmachine: Using API Version  1
	I1205 20:23:19.276027  583612 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:23:19.276449  583612 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:23:19.278763  583612 out.go:177] * Stopping node "embed-certs-789000"  ...
	I1205 20:23:19.279959  583612 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 20:23:19.280007  583612 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:23:19.280382  583612 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 20:23:19.280423  583612 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:23:19.283797  583612 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:23:19.284373  583612 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:22:00 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:23:19.284423  583612 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:23:19.284780  583612 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:23:19.285109  583612 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:23:19.285328  583612 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:23:19.285546  583612 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:23:19.408179  583612 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 20:23:19.470677  583612 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 20:23:19.536408  583612 main.go:141] libmachine: Stopping "embed-certs-789000"...
	I1205 20:23:19.536458  583612 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:23:19.538523  583612 main.go:141] libmachine: (embed-certs-789000) Calling .Stop
	I1205 20:23:19.543313  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 0/120
	I1205 20:23:20.544642  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 1/120
	I1205 20:23:21.545811  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 2/120
	I1205 20:23:22.547128  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 3/120
	I1205 20:23:23.548491  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 4/120
	I1205 20:23:24.550719  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 5/120
	I1205 20:23:25.552179  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 6/120
	I1205 20:23:26.553500  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 7/120
	I1205 20:23:27.555317  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 8/120
	I1205 20:23:28.557114  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 9/120
	I1205 20:23:29.558403  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 10/120
	I1205 20:23:30.559863  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 11/120
	I1205 20:23:31.561143  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 12/120
	I1205 20:23:32.562898  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 13/120
	I1205 20:23:33.564375  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 14/120
	I1205 20:23:34.566566  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 15/120
	I1205 20:23:35.567814  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 16/120
	I1205 20:23:36.569418  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 17/120
	I1205 20:23:37.570878  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 18/120
	I1205 20:23:38.572453  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 19/120
	I1205 20:23:39.574673  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 20/120
	I1205 20:23:40.576127  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 21/120
	I1205 20:23:41.577737  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 22/120
	I1205 20:23:42.579610  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 23/120
	I1205 20:23:43.581312  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 24/120
	I1205 20:23:44.583658  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 25/120
	I1205 20:23:45.585375  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 26/120
	I1205 20:23:46.587259  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 27/120
	I1205 20:23:47.589623  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 28/120
	I1205 20:23:48.591069  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 29/120
	I1205 20:23:49.593092  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 30/120
	I1205 20:23:50.594916  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 31/120
	I1205 20:23:51.596067  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 32/120
	I1205 20:23:52.597579  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 33/120
	I1205 20:23:53.599373  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 34/120
	I1205 20:23:54.601565  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 35/120
	I1205 20:23:55.603144  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 36/120
	I1205 20:23:56.605079  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 37/120
	I1205 20:23:57.606689  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 38/120
	I1205 20:23:58.608157  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 39/120
	I1205 20:23:59.609382  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 40/120
	I1205 20:24:00.611046  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 41/120
	I1205 20:24:01.612581  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 42/120
	I1205 20:24:02.615095  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 43/120
	I1205 20:24:03.616550  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 44/120
	I1205 20:24:04.618784  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 45/120
	I1205 20:24:05.620167  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 46/120
	I1205 20:24:06.621675  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 47/120
	I1205 20:24:07.623077  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 48/120
	I1205 20:24:08.624614  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 49/120
	I1205 20:24:09.626964  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 50/120
	I1205 20:24:10.628721  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 51/120
	I1205 20:24:11.630212  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 52/120
	I1205 20:24:12.632109  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 53/120
	I1205 20:24:13.634203  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 54/120
	I1205 20:24:14.635749  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 55/120
	I1205 20:24:15.637440  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 56/120
	I1205 20:24:16.638947  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 57/120
	I1205 20:24:17.640724  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 58/120
	I1205 20:24:18.643439  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 59/120
	I1205 20:24:19.645949  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 60/120
	I1205 20:24:20.648024  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 61/120
	I1205 20:24:21.649853  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 62/120
	I1205 20:24:22.651373  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 63/120
	I1205 20:24:23.652863  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 64/120
	I1205 20:24:24.655162  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 65/120
	I1205 20:24:25.657711  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 66/120
	I1205 20:24:26.659218  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 67/120
	I1205 20:24:27.661060  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 68/120
	I1205 20:24:28.662758  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 69/120
	I1205 20:24:29.664382  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 70/120
	I1205 20:24:30.666013  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 71/120
	I1205 20:24:31.668349  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 72/120
	I1205 20:24:32.669764  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 73/120
	I1205 20:24:33.671259  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 74/120
	I1205 20:24:34.672901  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 75/120
	I1205 20:24:35.674337  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 76/120
	I1205 20:24:36.676124  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 77/120
	I1205 20:24:37.678303  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 78/120
	I1205 20:24:38.679883  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 79/120
	I1205 20:24:39.682594  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 80/120
	I1205 20:24:40.684036  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 81/120
	I1205 20:24:41.685809  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 82/120
	I1205 20:24:42.687279  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 83/120
	I1205 20:24:43.688823  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 84/120
	I1205 20:24:44.691011  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 85/120
	I1205 20:24:45.692634  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 86/120
	I1205 20:24:46.694457  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 87/120
	I1205 20:24:47.696931  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 88/120
	I1205 20:24:48.698678  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 89/120
	I1205 20:24:49.701193  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 90/120
	I1205 20:24:50.703069  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 91/120
	I1205 20:24:51.704640  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 92/120
	I1205 20:24:52.706112  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 93/120
	I1205 20:24:53.707622  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 94/120
	I1205 20:24:54.709716  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 95/120
	I1205 20:24:55.711281  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 96/120
	I1205 20:24:56.712843  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 97/120
	I1205 20:24:57.714525  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 98/120
	I1205 20:24:58.716075  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 99/120
	I1205 20:24:59.718397  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 100/120
	I1205 20:25:00.719884  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 101/120
	I1205 20:25:01.721881  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 102/120
	I1205 20:25:02.723400  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 103/120
	I1205 20:25:03.725058  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 104/120
	I1205 20:25:04.727124  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 105/120
	I1205 20:25:05.728877  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 106/120
	I1205 20:25:06.730790  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 107/120
	I1205 20:25:07.732388  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 108/120
	I1205 20:25:08.734457  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 109/120
	I1205 20:25:09.737071  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 110/120
	I1205 20:25:10.739309  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 111/120
	I1205 20:25:11.740871  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 112/120
	I1205 20:25:12.742608  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 113/120
	I1205 20:25:13.744245  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 114/120
	I1205 20:25:14.746613  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 115/120
	I1205 20:25:15.748589  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 116/120
	I1205 20:25:16.750913  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 117/120
	I1205 20:25:17.752550  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 118/120
	I1205 20:25:18.754752  583612 main.go:141] libmachine: (embed-certs-789000) Waiting for machine to stop 119/120
	I1205 20:25:19.755830  583612 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 20:25:19.755905  583612 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:25:19.757954  583612 out.go:201] 
	W1205 20:25:19.759475  583612 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 20:25:19.759492  583612 out.go:270] * 
	* 
	W1205 20:25:19.762698  583612 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:25:19.764522  583612 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-789000 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-789000 -n embed-certs-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-789000 -n embed-certs-789000: exit status 3 (18.452619573s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:25:38.220705  584686 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E1205 20:25:38.220730  584686 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-789000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.03s)
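What the log above records is a bounded retry: minikube asks the KVM driver to stop the guest, then polls roughly once per second for 120 attempts ("Waiting for machine to stop N/120") and gives up with GUEST_STOP_TIMEOUT / exit status 82 once the domain still reports "Running". The Go sketch below reproduces that shape for illustration only; the machine interface and helper names are invented here and are not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// machine is a hypothetical stand-in for the libmachine driver seen in the log.
type machine interface {
	Stop() error
	State() (string, error)
}

// waitForStop issues a stop request, then polls up to maxAttempts times,
// mirroring the "Waiting for machine to stop N/120" lines above. If the guest
// never leaves "Running", it returns the timeout error that surfaces in the
// report as GUEST_STOP_TIMEOUT.
func waitForStop(m machine, maxAttempts int, interval time.Duration) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < maxAttempts; i++ {
		if st, err := m.State(); err == nil && st != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM simulates a guest that ignores the stop request, as in this test.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

func main() {
	// Short limits so the example finishes quickly; the real run used 120 attempts at ~1s each.
	if err := waitForStop(stuckVM{}, 5, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}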

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-386085 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-386085 create -f testdata/busybox.yaml: exit status 1 (48.474271ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-386085" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-386085 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 6 (236.845424ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:25:16.666050  584580 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-386085" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-386085" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 6 (233.48323ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:25:16.899894  584610 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-386085" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-386085" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)
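This DeployApp failure is a knock-on effect rather than a new problem: the "old-k8s-version-386085" entry is missing from the kubeconfig (see the status.go:458 errors above), so any kubectl invocation that names that context fails immediately with "context ... does not exist". Below is a minimal Go sketch of checking for the context up front; it assumes only that kubectl is on PATH, and the helper name is made up for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether kubectl knows a context with the given name.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, fmt.Errorf("listing kubectl contexts: %w", err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("old-k8s-version-386085")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if !ok {
		fmt.Println(`context "old-k8s-version-386085" does not exist; skipping kubectl create`)
	}
}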

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (111.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-386085 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-386085 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m50.977304908s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-386085 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-386085 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-386085 describe deploy/metrics-server -n kube-system: exit status 1 (46.669748ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-386085" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on the metrics-server deployment. args "kubectl --context old-k8s-version-386085 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 6 (237.374305ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:27:08.160831  585485 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-386085" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-386085" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (111.26s)
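The EnableAddonWhileActive failure bottoms out in a single symptom: the in-VM kubectl apply for the metrics-server manifests cannot reach the API server on localhost:8443 ("connection refused") because the cluster never came back after the earlier restart. The short Go sketch below is a hypothetical reachability probe, not part of the test suite, showing how "API server not listening" can be separated from other apply errors.

package main

import (
	"fmt"
	"net"
	"time"
)

// apiServerReachable reports whether a TCP connection to the API server
// endpoint can be opened within the timeout. A "connection refused" here is
// the same condition the kubectl apply above fails on.
func apiServerReachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if !apiServerReachable("localhost:8443", 2*time.Second) {
		fmt.Println("API server on localhost:8443 is not accepting connections; skipping addon apply")
	}
}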

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-942599 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-942599 --alsologtostderr -v=3: exit status 82 (2m0.549637204s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-942599"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:25:31.417797  584834 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:25:31.417950  584834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:25:31.417961  584834 out.go:358] Setting ErrFile to fd 2...
	I1205 20:25:31.417968  584834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:25:31.418176  584834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:25:31.418491  584834 out.go:352] Setting JSON to false
	I1205 20:25:31.418603  584834 mustload.go:65] Loading cluster: default-k8s-diff-port-942599
	I1205 20:25:31.419066  584834 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:25:31.419138  584834 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:25:31.419327  584834 mustload.go:65] Loading cluster: default-k8s-diff-port-942599
	I1205 20:25:31.419482  584834 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:25:31.419522  584834 stop.go:39] StopHost: default-k8s-diff-port-942599
	I1205 20:25:31.420162  584834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:25:31.420219  584834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:25:31.436072  584834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I1205 20:25:31.436566  584834 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:25:31.437212  584834 main.go:141] libmachine: Using API Version  1
	I1205 20:25:31.437235  584834 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:25:31.437621  584834 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:25:31.439900  584834 out.go:177] * Stopping node "default-k8s-diff-port-942599"  ...
	I1205 20:25:31.441660  584834 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 20:25:31.441699  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:25:31.441996  584834 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 20:25:31.442043  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:25:31.445007  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:25:31.445579  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:24:08 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:25:31.445608  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:25:31.445757  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:25:31.445949  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:25:31.446122  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:25:31.446291  584834 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:25:31.554929  584834 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 20:25:31.614807  584834 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 20:25:31.690465  584834 main.go:141] libmachine: Stopping "default-k8s-diff-port-942599"...
	I1205 20:25:31.690586  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:25:31.692310  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Stop
	I1205 20:25:31.696322  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 0/120
	I1205 20:25:32.698110  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 1/120
	I1205 20:25:33.699547  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 2/120
	I1205 20:25:34.701158  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 3/120
	I1205 20:25:35.702655  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 4/120
	I1205 20:25:36.704993  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 5/120
	I1205 20:25:37.706908  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 6/120
	I1205 20:25:38.708488  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 7/120
	I1205 20:25:39.710041  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 8/120
	I1205 20:25:40.711482  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 9/120
	I1205 20:25:41.712864  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 10/120
	I1205 20:25:42.714236  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 11/120
	I1205 20:25:43.715652  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 12/120
	I1205 20:25:44.717096  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 13/120
	I1205 20:25:45.718822  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 14/120
	I1205 20:25:46.721356  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 15/120
	I1205 20:25:47.722947  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 16/120
	I1205 20:25:48.724466  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 17/120
	I1205 20:25:49.726043  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 18/120
	I1205 20:25:50.727649  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 19/120
	I1205 20:25:51.730280  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 20/120
	I1205 20:25:52.732255  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 21/120
	I1205 20:25:53.733915  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 22/120
	I1205 20:25:54.735832  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 23/120
	I1205 20:25:55.737555  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 24/120
	I1205 20:25:56.740012  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 25/120
	I1205 20:25:57.741738  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 26/120
	I1205 20:25:58.743673  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 27/120
	I1205 20:25:59.745299  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 28/120
	I1205 20:26:00.746841  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 29/120
	I1205 20:26:01.749418  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 30/120
	I1205 20:26:02.751137  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 31/120
	I1205 20:26:03.752819  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 32/120
	I1205 20:26:04.754339  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 33/120
	I1205 20:26:05.755722  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 34/120
	I1205 20:26:06.758155  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 35/120
	I1205 20:26:07.760218  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 36/120
	I1205 20:26:08.762045  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 37/120
	I1205 20:26:09.763771  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 38/120
	I1205 20:26:10.765553  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 39/120
	I1205 20:26:11.767038  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 40/120
	I1205 20:26:12.768612  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 41/120
	I1205 20:26:13.770334  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 42/120
	I1205 20:26:14.771923  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 43/120
	I1205 20:26:15.773407  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 44/120
	I1205 20:26:16.775677  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 45/120
	I1205 20:26:17.777372  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 46/120
	I1205 20:26:18.779100  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 47/120
	I1205 20:26:19.780658  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 48/120
	I1205 20:26:20.782463  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 49/120
	I1205 20:26:21.785089  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 50/120
	I1205 20:26:22.786810  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 51/120
	I1205 20:26:23.788655  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 52/120
	I1205 20:26:24.790601  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 53/120
	I1205 20:26:25.792448  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 54/120
	I1205 20:26:26.794686  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 55/120
	I1205 20:26:27.796384  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 56/120
	I1205 20:26:28.798144  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 57/120
	I1205 20:26:29.799702  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 58/120
	I1205 20:26:30.801430  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 59/120
	I1205 20:26:31.802909  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 60/120
	I1205 20:26:32.804626  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 61/120
	I1205 20:26:33.806665  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 62/120
	I1205 20:26:34.808168  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 63/120
	I1205 20:26:35.810006  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 64/120
	I1205 20:26:36.812545  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 65/120
	I1205 20:26:37.814240  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 66/120
	I1205 20:26:38.815973  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 67/120
	I1205 20:26:39.817732  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 68/120
	I1205 20:26:40.819309  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 69/120
	I1205 20:26:41.821171  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 70/120
	I1205 20:26:42.822671  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 71/120
	I1205 20:26:43.824373  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 72/120
	I1205 20:26:44.826012  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 73/120
	I1205 20:26:45.827894  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 74/120
	I1205 20:26:46.830281  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 75/120
	I1205 20:26:47.832222  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 76/120
	I1205 20:26:48.833953  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 77/120
	I1205 20:26:49.835540  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 78/120
	I1205 20:26:50.837108  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 79/120
	I1205 20:26:51.839623  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 80/120
	I1205 20:26:52.841226  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 81/120
	I1205 20:26:53.842846  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 82/120
	I1205 20:26:54.844182  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 83/120
	I1205 20:26:55.845822  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 84/120
	I1205 20:26:56.848021  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 85/120
	I1205 20:26:57.849560  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 86/120
	I1205 20:26:58.851144  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 87/120
	I1205 20:26:59.852743  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 88/120
	I1205 20:27:00.854175  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 89/120
	I1205 20:27:01.855560  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 90/120
	I1205 20:27:02.857123  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 91/120
	I1205 20:27:03.858501  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 92/120
	I1205 20:27:04.859799  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 93/120
	I1205 20:27:05.861355  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 94/120
	I1205 20:27:06.863456  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 95/120
	I1205 20:27:07.865194  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 96/120
	I1205 20:27:08.866661  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 97/120
	I1205 20:27:09.868236  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 98/120
	I1205 20:27:10.869910  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 99/120
	I1205 20:27:11.872229  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 100/120
	I1205 20:27:12.873873  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 101/120
	I1205 20:27:13.875427  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 102/120
	I1205 20:27:14.876814  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 103/120
	I1205 20:27:15.878428  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 104/120
	I1205 20:27:16.880702  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 105/120
	I1205 20:27:17.882262  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 106/120
	I1205 20:27:18.883708  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 107/120
	I1205 20:27:19.885346  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 108/120
	I1205 20:27:20.886895  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 109/120
	I1205 20:27:21.888297  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 110/120
	I1205 20:27:22.889863  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 111/120
	I1205 20:27:23.891365  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 112/120
	I1205 20:27:24.893071  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 113/120
	I1205 20:27:25.894589  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 114/120
	I1205 20:27:26.896816  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 115/120
	I1205 20:27:27.898314  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 116/120
	I1205 20:27:28.899712  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 117/120
	I1205 20:27:29.901416  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 118/120
	I1205 20:27:30.903032  584834 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for machine to stop 119/120
	I1205 20:27:31.904457  584834 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 20:27:31.904532  584834 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:27:31.906808  584834 out.go:201] 
	W1205 20:27:31.908496  584834 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 20:27:31.908512  584834 out.go:270] * 
	* 
	W1205 20:27:31.912074  584834 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:27:31.913610  584834 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-942599 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599: exit status 3 (18.657129627s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:27:50.572677  585701 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host
	E1205 20:27:50.572699  585701 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-942599" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-816185 -n no-preload-816185
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-816185 -n no-preload-816185: exit status 3 (3.168003377s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:25:36.780725  584868 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.37:22: connect: no route to host
	E1205 20:25:36.780750  584868 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.37:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-816185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-816185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154163141s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.37:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-816185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-816185 -n no-preload-816185
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-816185 -n no-preload-816185: exit status 3 (3.061210168s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:25:45.996724  584996 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.37:22: connect: no route to host
	E1205 20:25:45.996753  584996 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.37:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-816185" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
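Both EnableAddonAfterStop failures in this group follow the same shape: because the preceding stop timed out, "minikube status" exits with status 3 and reports the host as "Error" (the node's SSH endpoint is unreachable, "no route to host") instead of the expected "Stopped". The Go sketch below, with a made-up helper name, shows one way to shell out to the status command and surface the exit code alongside the reported host state when reproducing this by hand.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs `minikube status --format={{.Host}}` for a profile and returns
// the reported host state together with the command's exit code.
func hostStatus(binary, profile string) (state string, exitCode int, err error) {
	out, err := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile).Output()
	state = strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return state, ee.ExitCode(), nil // a non-zero exit is data here, not a failure
	}
	if err != nil {
		return "", 0, err
	}
	return state, 0, nil
}

func main() {
	state, code, err := hostStatus("out/minikube-linux-amd64", "no-preload-816185")
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("host=%q exit=%d (in this report, exit 3 means the SSH endpoint was unreachable)\n", state, code)
}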

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-789000 -n embed-certs-789000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-789000 -n embed-certs-789000: exit status 3 (3.168050625s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:25:41.388644  584932 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E1205 20:25:41.388667  584932 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-789000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-789000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153943762s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-789000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-789000 -n embed-certs-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-789000 -n embed-certs-789000: exit status 3 (3.061661388s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:25:50.604725  585082 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E1205 20:25:50.604758  585082 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-789000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (726.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-386085 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-386085 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m2.839893899s)

                                                
                                                
-- stdout --
	* [old-k8s-version-386085] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-386085" primary control-plane node in "old-k8s-version-386085" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-386085" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:27:11.854509  585602 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:27:11.854619  585602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:27:11.854624  585602 out.go:358] Setting ErrFile to fd 2...
	I1205 20:27:11.854628  585602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:27:11.854797  585602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:27:11.855353  585602 out.go:352] Setting JSON to false
	I1205 20:27:11.856419  585602 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":11378,"bootTime":1733419054,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:27:11.856534  585602 start.go:139] virtualization: kvm guest
	I1205 20:27:11.859772  585602 out.go:177] * [old-k8s-version-386085] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:27:11.861172  585602 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:27:11.861228  585602 notify.go:220] Checking for updates...
	I1205 20:27:11.863669  585602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:27:11.865118  585602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:27:11.866403  585602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:27:11.867858  585602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:27:11.869310  585602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:27:11.871347  585602 config.go:182] Loaded profile config "old-k8s-version-386085": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:27:11.871938  585602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:27:11.872013  585602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:27:11.887342  585602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I1205 20:27:11.887933  585602 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:27:11.888587  585602 main.go:141] libmachine: Using API Version  1
	I1205 20:27:11.888610  585602 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:27:11.888905  585602 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:27:11.889088  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:27:11.891129  585602 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 20:27:11.892367  585602 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:27:11.892683  585602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:27:11.892723  585602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:27:11.908064  585602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40215
	I1205 20:27:11.908585  585602 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:27:11.909130  585602 main.go:141] libmachine: Using API Version  1
	I1205 20:27:11.909153  585602 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:27:11.909470  585602 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:27:11.909643  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:27:11.946389  585602 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:27:11.947635  585602 start.go:297] selected driver: kvm2
	I1205 20:27:11.947651  585602 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:27:11.947796  585602 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:27:11.948586  585602 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:27:11.948677  585602 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:27:11.964203  585602 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:27:11.964645  585602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:27:11.964682  585602 cni.go:84] Creating CNI manager for ""
	I1205 20:27:11.964725  585602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:27:11.964762  585602 start.go:340] cluster config:
	{Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:27:11.964867  585602 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:27:11.966606  585602 out.go:177] * Starting "old-k8s-version-386085" primary control-plane node in "old-k8s-version-386085" cluster
	I1205 20:27:11.967742  585602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:27:11.967793  585602 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 20:27:11.967809  585602 cache.go:56] Caching tarball of preloaded images
	I1205 20:27:11.967922  585602 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:27:11.967937  585602 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 20:27:11.968082  585602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json ...
	I1205 20:27:11.968356  585602 start.go:360] acquireMachinesLock for old-k8s-version-386085: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:30:43.457332  585602 start.go:364] duration metric: took 3m31.488905557s to acquireMachinesLock for "old-k8s-version-386085"
	I1205 20:30:43.457418  585602 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:43.457427  585602 fix.go:54] fixHost starting: 
	I1205 20:30:43.457835  585602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:43.457891  585602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:43.474845  585602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I1205 20:30:43.475386  585602 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:43.475993  585602 main.go:141] libmachine: Using API Version  1
	I1205 20:30:43.476026  585602 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:43.476404  585602 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:43.476613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:30:43.476778  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetState
	I1205 20:30:43.478300  585602 fix.go:112] recreateIfNeeded on old-k8s-version-386085: state=Stopped err=<nil>
	I1205 20:30:43.478329  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	W1205 20:30:43.478502  585602 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:43.480644  585602 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386085" ...
	I1205 20:30:43.482307  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .Start
	I1205 20:30:43.482501  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring networks are active...
	I1205 20:30:43.483222  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network default is active
	I1205 20:30:43.483574  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network mk-old-k8s-version-386085 is active
	I1205 20:30:43.484156  585602 main.go:141] libmachine: (old-k8s-version-386085) Getting domain xml...
	I1205 20:30:43.485045  585602 main.go:141] libmachine: (old-k8s-version-386085) Creating domain...
	I1205 20:30:44.770817  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting to get IP...
	I1205 20:30:44.772079  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:44.772538  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:44.772599  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:44.772517  586577 retry.go:31] will retry after 247.056435ms: waiting for machine to come up
	I1205 20:30:45.021096  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.021642  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.021560  586577 retry.go:31] will retry after 241.543543ms: waiting for machine to come up
	I1205 20:30:45.265136  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.265654  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.265683  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.265596  586577 retry.go:31] will retry after 324.624293ms: waiting for machine to come up
	I1205 20:30:45.592067  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.592603  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.592636  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.592558  586577 retry.go:31] will retry after 408.275958ms: waiting for machine to come up
	I1205 20:30:46.002321  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.002872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.002904  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.002808  586577 retry.go:31] will retry after 693.356488ms: waiting for machine to come up
	I1205 20:30:46.697505  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.697874  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.697900  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.697846  586577 retry.go:31] will retry after 906.807324ms: waiting for machine to come up
	I1205 20:30:47.606601  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:47.607065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:47.607098  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:47.607001  586577 retry.go:31] will retry after 1.007867893s: waiting for machine to come up
	I1205 20:30:48.617140  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:48.617641  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:48.617674  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:48.617608  586577 retry.go:31] will retry after 1.15317606s: waiting for machine to come up
	I1205 20:30:49.773126  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:49.773670  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:49.773699  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:49.773620  586577 retry.go:31] will retry after 1.342422822s: waiting for machine to come up
	I1205 20:30:51.117592  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:51.118034  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:51.118065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:51.117973  586577 retry.go:31] will retry after 1.575794078s: waiting for machine to come up
	I1205 20:30:52.695389  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:52.695838  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:52.695868  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:52.695784  586577 retry.go:31] will retry after 2.377931285s: waiting for machine to come up
	I1205 20:30:55.076859  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:55.077428  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:55.077469  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:55.077377  586577 retry.go:31] will retry after 2.586837249s: waiting for machine to come up
	I1205 20:30:57.667200  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:57.667644  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:57.667681  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:57.667592  586577 retry.go:31] will retry after 2.856276116s: waiting for machine to come up
	I1205 20:31:00.525334  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:00.525796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:31:00.525830  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:31:00.525740  586577 retry.go:31] will retry after 5.119761936s: waiting for machine to come up
	I1205 20:31:05.646790  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647230  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has current primary IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647264  585602 main.go:141] libmachine: (old-k8s-version-386085) Found IP for machine: 192.168.72.144
	I1205 20:31:05.647278  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserving static IP address...
	I1205 20:31:05.647796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.647834  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | skip adding static IP to network mk-old-k8s-version-386085 - found existing host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"}
	I1205 20:31:05.647856  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserved static IP address: 192.168.72.144
	I1205 20:31:05.647872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Getting to WaitForSSH function...
	I1205 20:31:05.647889  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting for SSH to be available...
	I1205 20:31:05.650296  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650610  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.650643  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH client type: external
	I1205 20:31:05.650779  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa (-rw-------)
	I1205 20:31:05.650816  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:05.650837  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | About to run SSH command:
	I1205 20:31:05.650851  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | exit 0
	I1205 20:31:05.776876  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | SSH cmd err, output: <nil>: 
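
The run of retry.go entries above is the kvm2 driver polling libvirt for the restarted VM's DHCP lease, sleeping a little longer after each miss until 192.168.72.144 shows up and SSH answers. A minimal Go sketch of that wait-with-backoff pattern (hypothetical helper names, not minikube's actual retry package) could look like this:

package main

// Sketch of a wait-with-backoff loop like the one visible in the
// retry.go log lines: poll for the VM's IP, growing the delay after
// each failed attempt until a deadline passes.

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // back off a little more each round
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Fake lookup that "finds" the address after a few attempts, standing
	// in for the libvirt DHCP-lease query done by the kvm2 driver.
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.72.144", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
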
	I1205 20:31:05.777311  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:31:05.777948  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:05.780609  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781053  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.781091  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781319  585602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json ...
	I1205 20:31:05.781585  585602 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:05.781607  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:05.781942  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.784729  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785155  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.785191  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785326  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.785491  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785659  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785886  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.786078  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.786309  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.786323  585602 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:05.893034  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:05.893079  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893388  585602 buildroot.go:166] provisioning hostname "old-k8s-version-386085"
	I1205 20:31:05.893426  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893623  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.896484  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.896883  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.896910  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.897031  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.897252  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897441  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897615  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.897796  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.897965  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.897977  585602 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386085 && echo "old-k8s-version-386085" | sudo tee /etc/hostname
	I1205 20:31:06.017910  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386085
	
	I1205 20:31:06.017939  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.020956  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021298  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.021332  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021494  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021863  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021995  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.022137  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.022325  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.022342  585602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386085' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386085/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386085' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:06.138200  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:06.138234  585602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:06.138261  585602 buildroot.go:174] setting up certificates
	I1205 20:31:06.138274  585602 provision.go:84] configureAuth start
	I1205 20:31:06.138287  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:06.138588  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.141488  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.141909  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.141965  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.142096  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.144144  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144720  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.144742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144951  585602 provision.go:143] copyHostCerts
	I1205 20:31:06.145020  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:06.145031  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:06.145085  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:06.145206  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:06.145219  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:06.145248  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:06.145335  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:06.145346  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:06.145376  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:06.145452  585602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386085 san=[127.0.0.1 192.168.72.144 localhost minikube old-k8s-version-386085]
	I1205 20:31:06.276466  585602 provision.go:177] copyRemoteCerts
	I1205 20:31:06.276530  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:06.276559  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.279218  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279550  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.279578  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279766  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.279990  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.280152  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.280317  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.362479  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:06.387631  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:31:06.413110  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:06.437931  585602 provision.go:87] duration metric: took 299.641033ms to configureAuth
	I1205 20:31:06.437962  585602 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:06.438176  585602 config.go:182] Loaded profile config "old-k8s-version-386085": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:31:06.438272  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.441059  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441413  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.441444  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441655  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.441846  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.441992  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.442174  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.442379  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.442552  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.442568  585602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:06.655666  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:06.655699  585602 machine.go:96] duration metric: took 874.099032ms to provisionDockerMachine
	I1205 20:31:06.655713  585602 start.go:293] postStartSetup for "old-k8s-version-386085" (driver="kvm2")
	I1205 20:31:06.655723  585602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:06.655752  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.656082  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:06.656115  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.658835  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659178  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.659229  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659378  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.659636  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.659808  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.659971  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.744484  585602 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:06.749025  585602 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:06.749060  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:06.749134  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:06.749273  585602 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:06.749411  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:06.760720  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:06.785449  585602 start.go:296] duration metric: took 129.720092ms for postStartSetup
	I1205 20:31:06.785500  585602 fix.go:56] duration metric: took 23.328073686s for fixHost
	I1205 20:31:06.785526  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.788417  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.788797  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.788828  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.789049  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.789296  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789483  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789688  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.789870  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.790046  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.790065  585602 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:06.897579  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430666.872047181
	
	I1205 20:31:06.897606  585602 fix.go:216] guest clock: 1733430666.872047181
	I1205 20:31:06.897615  585602 fix.go:229] Guest: 2024-12-05 20:31:06.872047181 +0000 UTC Remote: 2024-12-05 20:31:06.785506394 +0000 UTC m=+234.970971247 (delta=86.540787ms)
	I1205 20:31:06.897679  585602 fix.go:200] guest clock delta is within tolerance: 86.540787ms
	I1205 20:31:06.897691  585602 start.go:83] releasing machines lock for "old-k8s-version-386085", held for 23.440303187s
	I1205 20:31:06.897727  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.898085  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.901127  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901530  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.901567  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901719  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902413  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902626  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902776  585602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:06.902827  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.902878  585602 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:06.902903  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.905664  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.905912  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906050  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906086  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906256  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906341  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906367  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906411  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906517  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906684  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906837  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906849  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.907112  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.986078  585602 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:07.009500  585602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:07.159146  585602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:07.166263  585602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:07.166358  585602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:07.186021  585602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:07.186063  585602 start.go:495] detecting cgroup driver to use...
	I1205 20:31:07.186140  585602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:07.205074  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:07.221207  585602 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:07.221268  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:07.236669  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:07.252848  585602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:07.369389  585602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:07.504993  585602 docker.go:233] disabling docker service ...
	I1205 20:31:07.505101  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:07.523294  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:07.538595  585602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:07.687830  585602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:07.816176  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:07.833624  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:07.853409  585602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:31:07.853478  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.865346  585602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:07.865426  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.877962  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.889255  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
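
The four sed commands above point the CRI-O drop-in at the v1.20-era pause image and switch it to the cgroupfs cgroup manager with conmon placed in the "pod" cgroup. After those edits the relevant keys in /etc/crio/crio.conf.d/02-crio.conf would read roughly as follows (a sketch of the expected result under the usual CRI-O TOML layout, not a capture from this VM):

[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
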
	I1205 20:31:07.901632  585602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:07.916169  585602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:07.927092  585602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:07.927169  585602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:07.942288  585602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:31:07.953314  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:08.092156  585602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:08.205715  585602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:08.205799  585602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:08.214280  585602 start.go:563] Will wait 60s for crictl version
	I1205 20:31:08.214351  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:08.220837  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:08.265983  585602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:08.266065  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.295839  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.327805  585602 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 20:31:08.329278  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:08.332352  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332700  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:08.332747  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332930  585602 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:08.337611  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:08.350860  585602 kubeadm.go:883] updating cluster {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:08.351016  585602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:31:08.351090  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:08.403640  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:08.403716  585602 ssh_runner.go:195] Run: which lz4
	I1205 20:31:08.408211  585602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:08.413136  585602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:08.413168  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 20:31:10.209351  585602 crio.go:462] duration metric: took 1.801169802s to copy over tarball
	I1205 20:31:10.209438  585602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:31:13.303553  585602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094044744s)
	I1205 20:31:13.303598  585602 crio.go:469] duration metric: took 3.094215888s to extract the tarball
	I1205 20:31:13.303610  585602 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:13.350989  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:13.388660  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:13.388702  585602 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:13.388814  585602 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.388822  585602 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.388832  585602 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.388853  585602 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.388881  585602 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.388904  585602 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:31:13.388823  585602 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.388859  585602 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390414  585602 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390941  585602 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.391016  585602 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.390927  585602 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.391373  585602 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.391378  585602 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.565006  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.577450  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 20:31:13.584653  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.597086  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.619848  585602 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 20:31:13.619899  585602 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.619955  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.623277  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.628407  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.697151  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.703111  585602 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 20:31:13.703167  585602 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 20:31:13.703219  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736004  585602 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 20:31:13.736059  585602 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.736058  585602 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 20:31:13.736078  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.736094  585602 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.736104  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736135  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736187  585602 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 20:31:13.736207  585602 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.736235  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.783651  585602 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 20:31:13.783706  585602 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.783758  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.787597  585602 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 20:31:13.787649  585602 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.787656  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.787692  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.828445  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.828491  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.828544  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.828573  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.828616  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.828635  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.890937  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.992600  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.992661  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.992725  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.992780  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.095364  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:14.095462  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:14.163224  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 20:31:14.163320  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:14.163339  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:14.163420  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:14.163510  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.243805  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 20:31:14.243860  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 20:31:14.243881  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 20:31:14.287718  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 20:31:14.290994  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 20:31:14.291049  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 20:31:14.579648  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:14.728232  585602 cache_images.go:92] duration metric: took 1.339506459s to LoadCachedImages
	W1205 20:31:14.728389  585602 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1205 20:31:14.728417  585602 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.20.0 crio true true} ...
	I1205 20:31:14.728570  585602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386085 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:31:14.728672  585602 ssh_runner.go:195] Run: crio config
	I1205 20:31:14.778932  585602 cni.go:84] Creating CNI manager for ""
	I1205 20:31:14.778957  585602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:14.778967  585602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:14.778987  585602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386085 NodeName:old-k8s-version-386085 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:31:14.779131  585602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386085"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:14.779196  585602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 20:31:14.792400  585602 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:14.792494  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:14.802873  585602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:31:14.821562  585602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:14.839442  585602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 20:31:14.861314  585602 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:14.865457  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:14.878278  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:15.002193  585602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:15.030699  585602 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085 for IP: 192.168.72.144
	I1205 20:31:15.030734  585602 certs.go:194] generating shared ca certs ...
	I1205 20:31:15.030758  585602 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.030975  585602 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:15.031027  585602 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:15.031048  585602 certs.go:256] generating profile certs ...
	I1205 20:31:15.031206  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key
	I1205 20:31:15.031276  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18
	I1205 20:31:15.031324  585602 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key
	I1205 20:31:15.031489  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:15.031535  585602 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:15.031550  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:15.031581  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:15.031612  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:15.031644  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:15.031698  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:15.032410  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:15.063090  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:15.094212  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:15.124685  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:15.159953  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 20:31:15.204250  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:31:15.237483  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:15.276431  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:15.303774  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:15.328872  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:15.353852  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:15.380916  585602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:15.401082  585602 ssh_runner.go:195] Run: openssl version
	I1205 20:31:15.407442  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:15.420377  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425721  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425800  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.432475  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:15.446140  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:15.459709  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465165  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465241  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.471609  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:15.484139  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:15.496636  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501575  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501634  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.507814  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:31:15.521234  585602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:15.526452  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:15.532999  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:15.540680  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:15.547455  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:15.553996  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:15.560574  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:31:15.568489  585602 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:15.568602  585602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:15.568682  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.610693  585602 cri.go:89] found id: ""
	I1205 20:31:15.610808  585602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:15.622685  585602 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:15.622709  585602 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:15.622764  585602 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:15.633754  585602 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:15.634922  585602 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386085" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:31:15.635682  585602 kubeconfig.go:62] /home/jenkins/minikube-integration/20052-530897/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386085" cluster setting kubeconfig missing "old-k8s-version-386085" context setting]
	I1205 20:31:15.636878  585602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.719767  585602 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:15.731576  585602 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I1205 20:31:15.731622  585602 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:15.731639  585602 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:15.731705  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.777769  585602 cri.go:89] found id: ""
	I1205 20:31:15.777875  585602 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:15.797121  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:15.807961  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:15.807991  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:15.808042  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:31:15.818177  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:15.818270  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:15.829092  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:31:15.839471  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:15.839564  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:15.850035  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.859907  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:15.859984  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.870882  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:31:15.881475  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:15.881549  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:31:15.892078  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:15.904312  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.042308  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.787487  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.036864  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.128855  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.219276  585602 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:17.219380  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:17.720206  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.219623  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.719555  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.219776  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.719967  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.219686  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.719806  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.219875  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.719915  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:22.219930  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:22.719848  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.719903  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.220505  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.719726  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.220161  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.720115  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.220399  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.719567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:27.220124  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:27.719460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.719599  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.219672  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.720450  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.220436  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.719573  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.220357  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.720052  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:32.220318  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:32.719780  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.220114  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.719554  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.720021  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.219461  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.720334  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.219480  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.720159  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:37.219933  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:37.720360  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.219574  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.720034  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.219449  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.719752  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.219718  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.719771  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.219548  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.720381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.220435  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.720366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.219567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.719652  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.220259  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.719556  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.219850  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.720302  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.220377  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.720107  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.219913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.720441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.220220  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.719997  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.219843  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.719591  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.220132  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.719528  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.720234  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.219602  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.719522  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.220117  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.720426  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.220177  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.720100  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.219569  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.719796  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.219490  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.720420  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.219497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.720337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.219807  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.720112  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.219949  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.719626  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.219871  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.719466  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.219491  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.719760  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:02.220337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:02.720145  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.219463  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.719913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.219813  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.719940  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.219830  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.720324  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.220287  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.719584  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.219989  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.720289  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.220381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.719947  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.219838  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.719666  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.219756  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.720312  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.220369  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.720004  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:12.220304  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:12.720348  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.219553  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.720078  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.219614  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.719625  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.220118  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.720577  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.220392  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.719538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:17.220437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:17.220539  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:17.272666  585602 cri.go:89] found id: ""
	I1205 20:32:17.272702  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.272716  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:17.272723  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:17.272797  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:17.314947  585602 cri.go:89] found id: ""
	I1205 20:32:17.314977  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.314989  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:17.314996  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:17.315061  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:17.354511  585602 cri.go:89] found id: ""
	I1205 20:32:17.354548  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.354561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:17.354571  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:17.354640  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:17.393711  585602 cri.go:89] found id: ""
	I1205 20:32:17.393745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.393759  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:17.393768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:17.393836  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:17.434493  585602 cri.go:89] found id: ""
	I1205 20:32:17.434526  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.434535  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:17.434541  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:17.434602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:17.476201  585602 cri.go:89] found id: ""
	I1205 20:32:17.476235  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.476245  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:17.476253  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:17.476341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:17.516709  585602 cri.go:89] found id: ""
	I1205 20:32:17.516745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.516755  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:17.516762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:17.516818  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:17.557270  585602 cri.go:89] found id: ""
	I1205 20:32:17.557305  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.557314  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:17.557324  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:17.557348  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:17.606494  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:17.606540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:17.681372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:17.681412  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:17.696778  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:17.696816  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:17.839655  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:17.839679  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:17.839717  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.423552  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:20.439794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:20.439875  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:20.482820  585602 cri.go:89] found id: ""
	I1205 20:32:20.482866  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.482880  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:20.482888  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:20.482958  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:20.523590  585602 cri.go:89] found id: ""
	I1205 20:32:20.523629  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.523641  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:20.523649  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:20.523727  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:20.601603  585602 cri.go:89] found id: ""
	I1205 20:32:20.601638  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.601648  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:20.601656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:20.601728  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:20.643927  585602 cri.go:89] found id: ""
	I1205 20:32:20.643959  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.643972  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:20.643981  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:20.644054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:20.690935  585602 cri.go:89] found id: ""
	I1205 20:32:20.690964  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.690975  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:20.690984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:20.691054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:20.728367  585602 cri.go:89] found id: ""
	I1205 20:32:20.728400  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.728412  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:20.728420  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:20.728489  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:20.766529  585602 cri.go:89] found id: ""
	I1205 20:32:20.766562  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.766571  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:20.766578  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:20.766657  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:20.805641  585602 cri.go:89] found id: ""
	I1205 20:32:20.805680  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.805690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:20.805701  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:20.805718  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:20.884460  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:20.884495  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:20.884514  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.998367  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:20.998429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:21.041210  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:21.041247  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:21.103519  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:21.103557  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:23.619187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:23.633782  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:23.633872  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:23.679994  585602 cri.go:89] found id: ""
	I1205 20:32:23.680023  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.680032  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:23.680038  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:23.680094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:23.718362  585602 cri.go:89] found id: ""
	I1205 20:32:23.718425  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.718439  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:23.718447  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:23.718520  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:23.758457  585602 cri.go:89] found id: ""
	I1205 20:32:23.758491  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.758500  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:23.758506  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:23.758558  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:23.794612  585602 cri.go:89] found id: ""
	I1205 20:32:23.794649  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.794662  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:23.794671  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:23.794738  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:23.832309  585602 cri.go:89] found id: ""
	I1205 20:32:23.832341  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.832354  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:23.832361  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:23.832421  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:23.868441  585602 cri.go:89] found id: ""
	I1205 20:32:23.868472  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.868484  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:23.868492  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:23.868573  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:23.902996  585602 cri.go:89] found id: ""
	I1205 20:32:23.903025  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.903036  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:23.903050  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:23.903115  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:23.939830  585602 cri.go:89] found id: ""
	I1205 20:32:23.939865  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.939879  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:23.939892  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:23.939909  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:23.992310  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:23.992354  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:24.007378  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:24.007414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:24.077567  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:24.077594  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:24.077608  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:24.165120  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:24.165163  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:26.711674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:26.726923  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:26.727008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:26.763519  585602 cri.go:89] found id: ""
	I1205 20:32:26.763554  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.763563  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:26.763570  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:26.763628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:26.802600  585602 cri.go:89] found id: ""
	I1205 20:32:26.802635  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.802644  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:26.802650  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:26.802705  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:26.839920  585602 cri.go:89] found id: ""
	I1205 20:32:26.839967  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.839981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:26.839989  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:26.840076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:26.876377  585602 cri.go:89] found id: ""
	I1205 20:32:26.876406  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.876416  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:26.876422  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:26.876491  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:26.913817  585602 cri.go:89] found id: ""
	I1205 20:32:26.913845  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.913854  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:26.913862  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:26.913936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:26.955739  585602 cri.go:89] found id: ""
	I1205 20:32:26.955775  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.955788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:26.955798  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:26.955863  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:26.996191  585602 cri.go:89] found id: ""
	I1205 20:32:26.996223  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.996234  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:26.996242  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:26.996341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:27.040905  585602 cri.go:89] found id: ""
	I1205 20:32:27.040935  585602 logs.go:282] 0 containers: []
	W1205 20:32:27.040947  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:27.040958  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:27.040973  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:27.098103  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:27.098140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:27.116538  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:27.116574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:27.204154  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:27.204187  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:27.204208  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:27.300380  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:27.300431  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:29.840944  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:29.855784  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:29.855869  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:29.893728  585602 cri.go:89] found id: ""
	I1205 20:32:29.893765  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.893777  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:29.893786  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:29.893867  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:29.930138  585602 cri.go:89] found id: ""
	I1205 20:32:29.930176  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.930186  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:29.930193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:29.930248  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:29.966340  585602 cri.go:89] found id: ""
	I1205 20:32:29.966371  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.966380  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:29.966387  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:29.966463  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:30.003868  585602 cri.go:89] found id: ""
	I1205 20:32:30.003900  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.003920  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:30.003928  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:30.004001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:30.044332  585602 cri.go:89] found id: ""
	I1205 20:32:30.044363  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.044373  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:30.044380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:30.044445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:30.088044  585602 cri.go:89] found id: ""
	I1205 20:32:30.088085  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.088098  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:30.088106  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:30.088173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:30.124221  585602 cri.go:89] found id: ""
	I1205 20:32:30.124248  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.124258  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:30.124285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:30.124357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:30.162092  585602 cri.go:89] found id: ""
	I1205 20:32:30.162121  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.162133  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:30.162146  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:30.162162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:30.218526  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:30.218567  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:30.232240  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:30.232292  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:30.308228  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:30.308260  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:30.308296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:30.389348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:30.389391  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:32.934497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:32.949404  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:32.949488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:33.006117  585602 cri.go:89] found id: ""
	I1205 20:32:33.006148  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.006157  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:33.006163  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:33.006231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:33.064907  585602 cri.go:89] found id: ""
	I1205 20:32:33.064945  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.064958  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:33.064966  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:33.065031  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:33.101268  585602 cri.go:89] found id: ""
	I1205 20:32:33.101295  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.101304  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:33.101310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:33.101378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:33.141705  585602 cri.go:89] found id: ""
	I1205 20:32:33.141733  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.141743  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:33.141750  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:33.141810  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:33.180983  585602 cri.go:89] found id: ""
	I1205 20:32:33.181011  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.181020  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:33.181026  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:33.181086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:33.220742  585602 cri.go:89] found id: ""
	I1205 20:32:33.220779  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.220791  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:33.220799  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:33.220871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:33.255980  585602 cri.go:89] found id: ""
	I1205 20:32:33.256009  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.256017  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:33.256024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:33.256080  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:33.292978  585602 cri.go:89] found id: ""
	I1205 20:32:33.293005  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.293013  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:33.293023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:33.293034  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:33.347167  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:33.347213  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:33.361367  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:33.361408  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:33.435871  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:33.435915  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:33.435932  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:33.518835  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:33.518880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:36.066359  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:36.080867  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:36.080947  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:36.117647  585602 cri.go:89] found id: ""
	I1205 20:32:36.117678  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.117689  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:36.117697  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:36.117763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:36.154376  585602 cri.go:89] found id: ""
	I1205 20:32:36.154412  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.154428  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:36.154436  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:36.154498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:36.193225  585602 cri.go:89] found id: ""
	I1205 20:32:36.193261  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.193274  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:36.193282  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:36.193347  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:36.230717  585602 cri.go:89] found id: ""
	I1205 20:32:36.230748  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.230758  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:36.230764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:36.230817  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:36.270186  585602 cri.go:89] found id: ""
	I1205 20:32:36.270238  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.270252  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:36.270262  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:36.270340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:36.306378  585602 cri.go:89] found id: ""
	I1205 20:32:36.306425  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.306438  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:36.306447  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:36.306531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:36.342256  585602 cri.go:89] found id: ""
	I1205 20:32:36.342289  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.342300  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:36.342306  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:36.342380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:36.380684  585602 cri.go:89] found id: ""
	I1205 20:32:36.380718  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.380732  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:36.380745  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:36.380768  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:36.436066  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:36.436109  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:36.450255  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:36.450285  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:36.521857  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:36.521883  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:36.521897  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:36.608349  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:36.608395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:39.157366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:39.171267  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:39.171357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:39.214459  585602 cri.go:89] found id: ""
	I1205 20:32:39.214490  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.214520  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:39.214528  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:39.214583  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:39.250312  585602 cri.go:89] found id: ""
	I1205 20:32:39.250352  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.250366  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:39.250375  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:39.250437  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:39.286891  585602 cri.go:89] found id: ""
	I1205 20:32:39.286932  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.286944  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:39.286952  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:39.287019  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:39.323923  585602 cri.go:89] found id: ""
	I1205 20:32:39.323958  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.323970  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:39.323979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:39.324053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:39.360280  585602 cri.go:89] found id: ""
	I1205 20:32:39.360322  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.360331  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:39.360337  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:39.360403  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:39.397599  585602 cri.go:89] found id: ""
	I1205 20:32:39.397637  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.397650  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:39.397659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:39.397731  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:39.435132  585602 cri.go:89] found id: ""
	I1205 20:32:39.435159  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.435168  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:39.435174  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:39.435241  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:39.470653  585602 cri.go:89] found id: ""
	I1205 20:32:39.470682  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.470690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:39.470700  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:39.470714  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:39.511382  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:39.511413  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:39.563955  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:39.563994  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:39.578015  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:39.578044  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:39.658505  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:39.658535  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:39.658550  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:42.248607  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:42.263605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:42.263688  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:42.305480  585602 cri.go:89] found id: ""
	I1205 20:32:42.305508  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.305519  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:42.305527  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:42.305595  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:42.339969  585602 cri.go:89] found id: ""
	I1205 20:32:42.340001  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.340010  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:42.340016  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:42.340090  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:42.381594  585602 cri.go:89] found id: ""
	I1205 20:32:42.381630  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.381643  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:42.381651  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:42.381771  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:42.435039  585602 cri.go:89] found id: ""
	I1205 20:32:42.435072  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.435085  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:42.435093  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:42.435162  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:42.470567  585602 cri.go:89] found id: ""
	I1205 20:32:42.470595  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.470604  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:42.470610  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:42.470674  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:42.510695  585602 cri.go:89] found id: ""
	I1205 20:32:42.510723  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.510731  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:42.510738  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:42.510793  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:42.547687  585602 cri.go:89] found id: ""
	I1205 20:32:42.547711  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.547718  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:42.547735  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:42.547784  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:42.587160  585602 cri.go:89] found id: ""
	I1205 20:32:42.587191  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.587199  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:42.587211  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:42.587225  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:42.669543  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:42.669587  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:42.717795  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:42.717833  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:42.772644  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:42.772696  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:42.788443  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:42.788480  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:42.861560  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.362758  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:45.377178  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:45.377266  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:45.413055  585602 cri.go:89] found id: ""
	I1205 20:32:45.413088  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.413102  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:45.413111  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:45.413176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:45.453769  585602 cri.go:89] found id: ""
	I1205 20:32:45.453799  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.453808  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:45.453813  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:45.453879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:45.499481  585602 cri.go:89] found id: ""
	I1205 20:32:45.499511  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.499522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:45.499531  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:45.499598  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:45.537603  585602 cri.go:89] found id: ""
	I1205 20:32:45.537638  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.537647  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:45.537653  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:45.537707  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:45.572430  585602 cri.go:89] found id: ""
	I1205 20:32:45.572463  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.572471  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:45.572479  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:45.572556  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:45.610349  585602 cri.go:89] found id: ""
	I1205 20:32:45.610387  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.610398  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:45.610406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:45.610476  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:45.649983  585602 cri.go:89] found id: ""
	I1205 20:32:45.650018  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.650031  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:45.650038  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:45.650113  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:45.689068  585602 cri.go:89] found id: ""
	I1205 20:32:45.689099  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.689107  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:45.689118  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:45.689131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:45.743715  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:45.743758  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:45.759803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:45.759834  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:45.835107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.835133  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:45.835146  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:45.914590  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:45.914632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:48.456633  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:48.475011  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:48.475086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:48.512878  585602 cri.go:89] found id: ""
	I1205 20:32:48.512913  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.512925  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:48.512933  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:48.513002  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:48.551708  585602 cri.go:89] found id: ""
	I1205 20:32:48.551737  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.551744  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:48.551751  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:48.551805  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:48.590765  585602 cri.go:89] found id: ""
	I1205 20:32:48.590791  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.590800  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:48.590806  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:48.590859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:48.629447  585602 cri.go:89] found id: ""
	I1205 20:32:48.629473  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.629481  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:48.629487  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:48.629540  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:48.667299  585602 cri.go:89] found id: ""
	I1205 20:32:48.667329  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.667339  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:48.667347  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:48.667414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:48.703771  585602 cri.go:89] found id: ""
	I1205 20:32:48.703816  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.703830  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:48.703841  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:48.703911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:48.747064  585602 cri.go:89] found id: ""
	I1205 20:32:48.747098  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.747111  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:48.747118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:48.747186  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.786608  585602 cri.go:89] found id: ""
	I1205 20:32:48.786649  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.786663  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:48.786684  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:48.786700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:48.860834  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:48.860866  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:48.860881  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:48.944029  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:48.944082  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:48.982249  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:48.982284  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:49.036460  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:49.036509  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.556456  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:51.571498  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:51.571590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:51.616890  585602 cri.go:89] found id: ""
	I1205 20:32:51.616924  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.616934  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:51.616942  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:51.617008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:51.660397  585602 cri.go:89] found id: ""
	I1205 20:32:51.660433  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.660445  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:51.660453  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:51.660543  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:51.698943  585602 cri.go:89] found id: ""
	I1205 20:32:51.698973  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.698981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:51.698988  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:51.699041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:51.737254  585602 cri.go:89] found id: ""
	I1205 20:32:51.737288  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.737297  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:51.737310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:51.737366  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:51.775560  585602 cri.go:89] found id: ""
	I1205 20:32:51.775592  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.775600  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:51.775606  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:51.775681  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:51.814314  585602 cri.go:89] found id: ""
	I1205 20:32:51.814370  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.814383  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:51.814393  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:51.814464  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:51.849873  585602 cri.go:89] found id: ""
	I1205 20:32:51.849913  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.849935  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:51.849944  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:51.850018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:51.891360  585602 cri.go:89] found id: ""
	I1205 20:32:51.891388  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.891400  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:51.891412  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:51.891429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:51.943812  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:51.943854  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.959119  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:51.959152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:52.036014  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:52.036040  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:52.036059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:52.114080  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:52.114122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:54.657243  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:54.672319  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:54.672407  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:54.708446  585602 cri.go:89] found id: ""
	I1205 20:32:54.708475  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.708484  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:54.708491  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:54.708569  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:54.747309  585602 cri.go:89] found id: ""
	I1205 20:32:54.747347  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.747359  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:54.747370  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:54.747451  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:54.790742  585602 cri.go:89] found id: ""
	I1205 20:32:54.790772  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.790781  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:54.790787  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:54.790853  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:54.828857  585602 cri.go:89] found id: ""
	I1205 20:32:54.828885  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.828894  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:54.828902  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:54.828964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:54.867691  585602 cri.go:89] found id: ""
	I1205 20:32:54.867729  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.867740  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:54.867747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:54.867819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:54.907216  585602 cri.go:89] found id: ""
	I1205 20:32:54.907242  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.907249  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:54.907256  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:54.907308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:54.945800  585602 cri.go:89] found id: ""
	I1205 20:32:54.945827  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.945837  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:54.945844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:54.945895  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:54.993176  585602 cri.go:89] found id: ""
	I1205 20:32:54.993216  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.993228  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:54.993242  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:54.993258  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:55.045797  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:55.045835  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:55.060103  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:55.060136  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:55.129440  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:55.129467  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:55.129485  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:55.214949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:55.214999  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:57.755086  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:57.769533  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:57.769622  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:57.807812  585602 cri.go:89] found id: ""
	I1205 20:32:57.807847  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.807858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:57.807869  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:57.807941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:57.846179  585602 cri.go:89] found id: ""
	I1205 20:32:57.846209  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.846223  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:57.846232  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:57.846305  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:57.881438  585602 cri.go:89] found id: ""
	I1205 20:32:57.881473  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.881482  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:57.881496  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:57.881553  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:57.918242  585602 cri.go:89] found id: ""
	I1205 20:32:57.918283  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.918294  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:57.918302  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:57.918378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:57.962825  585602 cri.go:89] found id: ""
	I1205 20:32:57.962863  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.962873  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:57.962879  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:57.962955  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:58.004655  585602 cri.go:89] found id: ""
	I1205 20:32:58.004699  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.004711  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:58.004731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:58.004802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:58.043701  585602 cri.go:89] found id: ""
	I1205 20:32:58.043730  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.043738  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:58.043744  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:58.043802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:58.081400  585602 cri.go:89] found id: ""
	I1205 20:32:58.081437  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.081450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:58.081463  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:58.081486  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:58.135531  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:58.135573  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:58.149962  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:58.149998  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:58.227810  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:58.227834  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:58.227849  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:58.308173  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:58.308219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:00.848019  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:00.863423  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:00.863496  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:00.902526  585602 cri.go:89] found id: ""
	I1205 20:33:00.902553  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.902561  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:00.902567  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:00.902621  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:00.939891  585602 cri.go:89] found id: ""
	I1205 20:33:00.939932  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.939942  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:00.939948  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:00.940022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:00.981645  585602 cri.go:89] found id: ""
	I1205 20:33:00.981676  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.981684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:00.981691  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:00.981745  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:01.027753  585602 cri.go:89] found id: ""
	I1205 20:33:01.027780  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.027789  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:01.027795  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:01.027877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:01.064529  585602 cri.go:89] found id: ""
	I1205 20:33:01.064559  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.064567  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:01.064574  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:01.064628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:01.102239  585602 cri.go:89] found id: ""
	I1205 20:33:01.102272  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.102281  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:01.102287  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:01.102357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:01.139723  585602 cri.go:89] found id: ""
	I1205 20:33:01.139760  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.139770  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:01.139778  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:01.139845  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:01.176172  585602 cri.go:89] found id: ""
	I1205 20:33:01.176198  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.176207  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:01.176216  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:01.176231  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:01.230085  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:01.230133  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:01.245574  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:01.245617  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:01.340483  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:01.340520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:01.340537  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:01.416925  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:01.416972  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:03.958855  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:03.974024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:03.974096  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:04.021407  585602 cri.go:89] found id: ""
	I1205 20:33:04.021442  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.021451  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:04.021458  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:04.021523  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:04.063385  585602 cri.go:89] found id: ""
	I1205 20:33:04.063414  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.063423  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:04.063430  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:04.063488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:04.103693  585602 cri.go:89] found id: ""
	I1205 20:33:04.103735  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.103747  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:04.103756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:04.103815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:04.143041  585602 cri.go:89] found id: ""
	I1205 20:33:04.143072  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.143100  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:04.143109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:04.143179  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:04.180668  585602 cri.go:89] found id: ""
	I1205 20:33:04.180702  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.180712  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:04.180718  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:04.180778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:04.221848  585602 cri.go:89] found id: ""
	I1205 20:33:04.221885  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.221894  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:04.221901  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:04.222018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:04.263976  585602 cri.go:89] found id: ""
	I1205 20:33:04.264014  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.264024  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:04.264030  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:04.264097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:04.298698  585602 cri.go:89] found id: ""
	I1205 20:33:04.298726  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.298737  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:04.298751  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:04.298767  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:04.347604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:04.347659  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:04.361325  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:04.361361  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:04.437679  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:04.437704  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:04.437720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:04.520043  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:04.520103  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:07.070687  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:07.085290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:07.085367  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:07.126233  585602 cri.go:89] found id: ""
	I1205 20:33:07.126265  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.126276  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:07.126285  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:07.126346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:07.163004  585602 cri.go:89] found id: ""
	I1205 20:33:07.163040  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.163053  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:07.163061  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:07.163126  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:07.201372  585602 cri.go:89] found id: ""
	I1205 20:33:07.201412  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.201425  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:07.201435  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:07.201509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:07.237762  585602 cri.go:89] found id: ""
	I1205 20:33:07.237795  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.237807  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:07.237815  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:07.237885  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:07.273940  585602 cri.go:89] found id: ""
	I1205 20:33:07.273976  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.273985  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:07.273995  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:07.274057  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:07.311028  585602 cri.go:89] found id: ""
	I1205 20:33:07.311061  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.311070  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:07.311076  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:07.311131  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:07.347386  585602 cri.go:89] found id: ""
	I1205 20:33:07.347422  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.347433  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:07.347441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:07.347503  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:07.386412  585602 cri.go:89] found id: ""
	I1205 20:33:07.386446  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.386458  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:07.386471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:07.386489  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:07.430250  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:07.430280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:07.483936  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:07.483982  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:07.498201  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:07.498236  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:07.576741  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:07.576767  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:07.576780  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.164792  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:10.178516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:10.178596  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:10.215658  585602 cri.go:89] found id: ""
	I1205 20:33:10.215692  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.215702  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:10.215711  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:10.215779  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:10.251632  585602 cri.go:89] found id: ""
	I1205 20:33:10.251671  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.251683  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:10.251691  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:10.251763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:10.295403  585602 cri.go:89] found id: ""
	I1205 20:33:10.295435  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.295453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:10.295460  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:10.295513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:10.329747  585602 cri.go:89] found id: ""
	I1205 20:33:10.329778  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.329787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:10.329793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:10.329871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:10.369975  585602 cri.go:89] found id: ""
	I1205 20:33:10.370016  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.370028  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:10.370036  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:10.370104  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:10.408146  585602 cri.go:89] found id: ""
	I1205 20:33:10.408183  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.408196  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:10.408204  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:10.408288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:10.443803  585602 cri.go:89] found id: ""
	I1205 20:33:10.443839  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.443850  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:10.443858  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:10.443932  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:10.481784  585602 cri.go:89] found id: ""
	I1205 20:33:10.481826  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.481840  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:10.481854  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:10.481872  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:10.531449  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:10.531498  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:10.549258  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:10.549288  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:10.620162  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:10.620189  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:10.620206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.704656  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:10.704706  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:13.251518  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:13.264731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:13.264815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:13.297816  585602 cri.go:89] found id: ""
	I1205 20:33:13.297846  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.297855  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:13.297861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:13.297918  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:13.330696  585602 cri.go:89] found id: ""
	I1205 20:33:13.330724  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.330732  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:13.330738  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:13.330789  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:13.366257  585602 cri.go:89] found id: ""
	I1205 20:33:13.366304  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.366315  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:13.366321  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:13.366385  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:13.403994  585602 cri.go:89] found id: ""
	I1205 20:33:13.404030  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.404042  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:13.404051  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:13.404121  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:13.450160  585602 cri.go:89] found id: ""
	I1205 20:33:13.450189  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.450198  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:13.450205  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:13.450262  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:13.502593  585602 cri.go:89] found id: ""
	I1205 20:33:13.502629  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.502640  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:13.502650  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:13.502720  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:13.548051  585602 cri.go:89] found id: ""
	I1205 20:33:13.548084  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.548095  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:13.548103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:13.548166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:13.593913  585602 cri.go:89] found id: ""
	I1205 20:33:13.593947  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.593960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:13.593975  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:13.593997  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:13.674597  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:13.674628  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:13.674647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:13.760747  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:13.760796  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:13.804351  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:13.804383  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:13.856896  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:13.856958  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.372754  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:16.387165  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:16.387242  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:16.426612  585602 cri.go:89] found id: ""
	I1205 20:33:16.426655  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.426668  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:16.426676  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:16.426734  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:16.461936  585602 cri.go:89] found id: ""
	I1205 20:33:16.461974  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.461988  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:16.461997  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:16.462060  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:16.498010  585602 cri.go:89] found id: ""
	I1205 20:33:16.498044  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.498062  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:16.498069  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:16.498133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:16.533825  585602 cri.go:89] found id: ""
	I1205 20:33:16.533854  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.533863  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:16.533869  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:16.533941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:16.570834  585602 cri.go:89] found id: ""
	I1205 20:33:16.570875  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.570887  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:16.570896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:16.570968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:16.605988  585602 cri.go:89] found id: ""
	I1205 20:33:16.606026  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.606038  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:16.606047  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:16.606140  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:16.645148  585602 cri.go:89] found id: ""
	I1205 20:33:16.645178  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.645188  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:16.645195  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:16.645261  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:16.682449  585602 cri.go:89] found id: ""
	I1205 20:33:16.682479  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.682491  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:16.682502  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:16.682519  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.696944  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:16.696980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:16.777034  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:16.777064  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:16.777078  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:16.854812  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:16.854880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:16.905101  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:16.905131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.463427  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:19.477135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:19.477233  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:19.529213  585602 cri.go:89] found id: ""
	I1205 20:33:19.529248  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.529264  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:19.529274  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:19.529359  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:19.575419  585602 cri.go:89] found id: ""
	I1205 20:33:19.575453  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.575465  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:19.575474  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:19.575546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:19.616657  585602 cri.go:89] found id: ""
	I1205 20:33:19.616691  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.616704  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:19.616713  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:19.616787  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:19.653142  585602 cri.go:89] found id: ""
	I1205 20:33:19.653177  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.653189  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:19.653198  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:19.653267  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:19.690504  585602 cri.go:89] found id: ""
	I1205 20:33:19.690544  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.690555  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:19.690563  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:19.690635  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:19.730202  585602 cri.go:89] found id: ""
	I1205 20:33:19.730229  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.730237  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:19.730245  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:19.730302  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:19.767212  585602 cri.go:89] found id: ""
	I1205 20:33:19.767243  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.767255  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:19.767264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:19.767336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:19.803089  585602 cri.go:89] found id: ""
	I1205 20:33:19.803125  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.803137  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:19.803163  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:19.803180  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:19.884542  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:19.884589  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:19.925257  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:19.925303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.980457  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:19.980510  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:19.997026  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:19.997057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:20.075062  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
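(Every describe-nodes attempt in these cycles fails the same way: the kubeconfig points at localhost:8443 and nothing is listening there, hence the connection refused. A quick way to confirm that independently of kubectl is to probe the port directly; the sketch below does a TCP dial and then an HTTPS /healthz request, skipping certificate verification because the API server uses a cluster-local CA. It is an illustrative check, not part of the test tooling.)

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	addr := "localhost:8443" // the endpoint the kubeconfig in the log points at

	// A plain TCP dial: "connection refused" here matches the kubectl error
	// and means nothing is listening on the port at all.
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Println("TCP dial failed:", err)
		return
	}
	conn.Close()

	// If the port is open, hit /healthz; verification is skipped for this
	// probe only, since the API server's CA is not in the system trust store.
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		fmt.Println("HTTPS probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver /healthz status:", resp.Status)
}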
	I1205 20:33:22.575469  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:22.588686  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:22.588768  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:22.622824  585602 cri.go:89] found id: ""
	I1205 20:33:22.622860  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.622868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:22.622874  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:22.622931  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:22.659964  585602 cri.go:89] found id: ""
	I1205 20:33:22.660059  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.660074  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:22.660085  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:22.660153  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:22.695289  585602 cri.go:89] found id: ""
	I1205 20:33:22.695325  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.695337  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:22.695345  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:22.695417  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:22.734766  585602 cri.go:89] found id: ""
	I1205 20:33:22.734801  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.734813  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:22.734821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:22.734896  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:22.773778  585602 cri.go:89] found id: ""
	I1205 20:33:22.773806  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.773818  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:22.773826  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:22.773899  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:22.811468  585602 cri.go:89] found id: ""
	I1205 20:33:22.811503  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.811514  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:22.811521  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:22.811591  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:22.852153  585602 cri.go:89] found id: ""
	I1205 20:33:22.852210  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.852221  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:22.852227  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:22.852318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:22.888091  585602 cri.go:89] found id: ""
	I1205 20:33:22.888120  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.888129  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:22.888139  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:22.888155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:22.943210  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:22.943252  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:22.958356  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:22.958393  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:23.026732  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:23.026770  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:23.026788  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:23.106356  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:23.106395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:25.650832  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:25.665392  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:25.665475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:25.701109  585602 cri.go:89] found id: ""
	I1205 20:33:25.701146  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.701155  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:25.701162  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:25.701231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:25.738075  585602 cri.go:89] found id: ""
	I1205 20:33:25.738108  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.738117  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:25.738123  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:25.738176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:25.775031  585602 cri.go:89] found id: ""
	I1205 20:33:25.775078  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.775090  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:25.775100  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:25.775173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:25.811343  585602 cri.go:89] found id: ""
	I1205 20:33:25.811376  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.811386  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:25.811395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:25.811471  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:25.846635  585602 cri.go:89] found id: ""
	I1205 20:33:25.846674  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.846684  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:25.846692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:25.846766  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:25.881103  585602 cri.go:89] found id: ""
	I1205 20:33:25.881136  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.881145  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:25.881151  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:25.881224  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:25.917809  585602 cri.go:89] found id: ""
	I1205 20:33:25.917844  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.917855  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:25.917864  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:25.917936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:25.955219  585602 cri.go:89] found id: ""
	I1205 20:33:25.955245  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.955254  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:25.955264  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:25.955276  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:26.007016  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:26.007059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:26.021554  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:26.021601  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:26.099290  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:26.099321  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:26.099334  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:26.182955  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:26.182993  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:28.725201  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:28.739515  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:28.739602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.778187  585602 cri.go:89] found id: ""
	I1205 20:33:28.778230  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.778242  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:28.778249  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:28.778315  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:28.815788  585602 cri.go:89] found id: ""
	I1205 20:33:28.815826  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.815838  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:28.815845  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:28.815912  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:28.852222  585602 cri.go:89] found id: ""
	I1205 20:33:28.852251  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.852261  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:28.852289  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:28.852362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:28.889742  585602 cri.go:89] found id: ""
	I1205 20:33:28.889776  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.889787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:28.889794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:28.889859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:28.926872  585602 cri.go:89] found id: ""
	I1205 20:33:28.926903  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.926912  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:28.926919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:28.926972  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:28.963380  585602 cri.go:89] found id: ""
	I1205 20:33:28.963418  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.963432  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:28.963441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:28.963509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:29.000711  585602 cri.go:89] found id: ""
	I1205 20:33:29.000746  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.000764  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:29.000772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:29.000848  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:29.035934  585602 cri.go:89] found id: ""
	I1205 20:33:29.035963  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.035974  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:29.035987  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:29.036003  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:29.091336  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:29.091382  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:29.105784  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:29.105814  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:29.182038  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:29.182078  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:29.182095  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:29.261107  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:29.261153  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:31.802911  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:31.817285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:31.817369  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:31.854865  585602 cri.go:89] found id: ""
	I1205 20:33:31.854900  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.854914  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:31.854922  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:31.854995  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:31.893928  585602 cri.go:89] found id: ""
	I1205 20:33:31.893964  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.893977  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:31.893984  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:31.894053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:31.929490  585602 cri.go:89] found id: ""
	I1205 20:33:31.929527  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.929540  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:31.929548  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:31.929637  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:31.964185  585602 cri.go:89] found id: ""
	I1205 20:33:31.964211  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.964219  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:31.964225  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:31.964291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:32.002708  585602 cri.go:89] found id: ""
	I1205 20:33:32.002748  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.002760  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:32.002768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:32.002847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:32.040619  585602 cri.go:89] found id: ""
	I1205 20:33:32.040712  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.040740  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:32.040758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:32.040839  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:32.079352  585602 cri.go:89] found id: ""
	I1205 20:33:32.079390  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.079404  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:32.079412  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:32.079484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:32.117560  585602 cri.go:89] found id: ""
	I1205 20:33:32.117596  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.117608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:32.117629  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:32.117653  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:32.172639  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:32.172686  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:32.187687  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:32.187727  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:32.265000  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:32.265034  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:32.265051  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:32.348128  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:32.348176  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:34.890144  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:34.903953  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:34.904032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:34.939343  585602 cri.go:89] found id: ""
	I1205 20:33:34.939374  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.939383  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:34.939389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:34.939444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:34.978225  585602 cri.go:89] found id: ""
	I1205 20:33:34.978266  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.978278  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:34.978286  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:34.978363  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:35.015918  585602 cri.go:89] found id: ""
	I1205 20:33:35.015950  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.015960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:35.015966  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:35.016032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:35.053222  585602 cri.go:89] found id: ""
	I1205 20:33:35.053249  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.053257  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:35.053264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:35.053320  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:35.088369  585602 cri.go:89] found id: ""
	I1205 20:33:35.088401  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.088412  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:35.088421  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:35.088498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:35.135290  585602 cri.go:89] found id: ""
	I1205 20:33:35.135327  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.135338  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:35.135346  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:35.135412  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:35.174959  585602 cri.go:89] found id: ""
	I1205 20:33:35.174996  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.175008  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:35.175017  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:35.175097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:35.215101  585602 cri.go:89] found id: ""
	I1205 20:33:35.215134  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.215143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:35.215152  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:35.215167  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:35.269372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:35.269414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:35.285745  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:35.285776  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:35.364774  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:35.364807  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:35.364824  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:35.445932  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:35.445980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:37.996837  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:38.010545  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:38.010612  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:38.048292  585602 cri.go:89] found id: ""
	I1205 20:33:38.048334  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.048350  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:38.048360  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:38.048429  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:38.086877  585602 cri.go:89] found id: ""
	I1205 20:33:38.086911  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.086921  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:38.086927  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:38.087001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:38.122968  585602 cri.go:89] found id: ""
	I1205 20:33:38.122999  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.123010  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:38.123018  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:38.123082  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:38.164901  585602 cri.go:89] found id: ""
	I1205 20:33:38.164940  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.164949  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:38.164955  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:38.165006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:38.200697  585602 cri.go:89] found id: ""
	I1205 20:33:38.200725  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.200734  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:38.200740  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:38.200803  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:38.240306  585602 cri.go:89] found id: ""
	I1205 20:33:38.240338  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.240347  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:38.240354  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:38.240424  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:38.275788  585602 cri.go:89] found id: ""
	I1205 20:33:38.275823  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.275835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:38.275844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:38.275917  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:38.311431  585602 cri.go:89] found id: ""
	I1205 20:33:38.311468  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.311480  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:38.311493  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:38.311507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:38.361472  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:38.361515  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:38.375970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:38.376004  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:38.450913  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:38.450941  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:38.450961  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:38.527620  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:38.527666  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:41.072438  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:41.086085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:41.086168  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:41.123822  585602 cri.go:89] found id: ""
	I1205 20:33:41.123852  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.123861  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:41.123868  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:41.123919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:41.160343  585602 cri.go:89] found id: ""
	I1205 20:33:41.160371  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.160380  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:41.160389  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:41.160457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:41.198212  585602 cri.go:89] found id: ""
	I1205 20:33:41.198240  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.198249  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:41.198255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:41.198309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:41.233793  585602 cri.go:89] found id: ""
	I1205 20:33:41.233824  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.233832  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:41.233838  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:41.233890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:41.269397  585602 cri.go:89] found id: ""
	I1205 20:33:41.269435  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.269447  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:41.269457  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:41.269529  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:41.303079  585602 cri.go:89] found id: ""
	I1205 20:33:41.303116  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.303128  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:41.303136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:41.303196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:41.337784  585602 cri.go:89] found id: ""
	I1205 20:33:41.337817  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.337826  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:41.337832  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:41.337901  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:41.371410  585602 cri.go:89] found id: ""
	I1205 20:33:41.371438  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.371446  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:41.371456  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:41.371467  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:41.422768  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:41.422807  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:41.437427  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:41.437461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:41.510875  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:41.510898  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:41.510915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:41.590783  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:41.590826  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:44.136390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:44.149935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:44.150006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:44.187807  585602 cri.go:89] found id: ""
	I1205 20:33:44.187846  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.187858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:44.187866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:44.187933  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:44.224937  585602 cri.go:89] found id: ""
	I1205 20:33:44.224965  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.224973  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:44.224978  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:44.225040  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:44.260230  585602 cri.go:89] found id: ""
	I1205 20:33:44.260274  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.260287  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:44.260297  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:44.260439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:44.296410  585602 cri.go:89] found id: ""
	I1205 20:33:44.296439  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.296449  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:44.296455  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:44.296507  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:44.332574  585602 cri.go:89] found id: ""
	I1205 20:33:44.332623  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.332635  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:44.332642  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:44.332709  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:44.368925  585602 cri.go:89] found id: ""
	I1205 20:33:44.368973  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.368985  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:44.368994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:44.369068  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:44.410041  585602 cri.go:89] found id: ""
	I1205 20:33:44.410075  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.410088  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:44.410095  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:44.410165  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:44.454254  585602 cri.go:89] found id: ""
	I1205 20:33:44.454295  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.454316  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:44.454330  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:44.454346  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:44.507604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:44.507669  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:44.525172  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:44.525219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:44.599417  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:44.599446  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:44.599465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:44.681624  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:44.681685  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:47.230092  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:47.243979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:47.244076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:47.280346  585602 cri.go:89] found id: ""
	I1205 20:33:47.280376  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.280385  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:47.280392  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:47.280448  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:47.316454  585602 cri.go:89] found id: ""
	I1205 20:33:47.316479  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.316487  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:47.316493  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:47.316546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:47.353339  585602 cri.go:89] found id: ""
	I1205 20:33:47.353374  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.353386  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:47.353395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:47.353466  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:47.388256  585602 cri.go:89] found id: ""
	I1205 20:33:47.388319  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.388330  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:47.388339  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:47.388408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:47.424907  585602 cri.go:89] found id: ""
	I1205 20:33:47.424942  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.424953  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:47.424961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:47.425035  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:47.461386  585602 cri.go:89] found id: ""
	I1205 20:33:47.461416  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.461425  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:47.461431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:47.461485  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:47.501092  585602 cri.go:89] found id: ""
	I1205 20:33:47.501121  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.501130  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:47.501136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:47.501189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:47.559478  585602 cri.go:89] found id: ""
	I1205 20:33:47.559507  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.559520  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:47.559533  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:47.559551  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:47.609761  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:47.609800  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:47.626579  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:47.626606  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:47.713490  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:47.713520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:47.713540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:47.795346  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:47.795398  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.339441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:50.353134  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:50.353216  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:50.393950  585602 cri.go:89] found id: ""
	I1205 20:33:50.393979  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.393990  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:50.394007  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:50.394074  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:50.431166  585602 cri.go:89] found id: ""
	I1205 20:33:50.431201  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.431212  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:50.431221  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:50.431291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:50.472641  585602 cri.go:89] found id: ""
	I1205 20:33:50.472674  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.472684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:50.472692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:50.472763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:50.512111  585602 cri.go:89] found id: ""
	I1205 20:33:50.512152  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.512165  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:50.512173  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:50.512247  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:50.554500  585602 cri.go:89] found id: ""
	I1205 20:33:50.554536  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.554549  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:50.554558  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:50.554625  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:50.590724  585602 cri.go:89] found id: ""
	I1205 20:33:50.590755  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.590764  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:50.590771  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:50.590837  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:50.628640  585602 cri.go:89] found id: ""
	I1205 20:33:50.628666  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.628675  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:50.628681  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:50.628732  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:50.670009  585602 cri.go:89] found id: ""
	I1205 20:33:50.670039  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.670047  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:50.670063  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:50.670075  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:50.684236  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:50.684290  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:50.757761  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:50.757790  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:50.757813  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:50.839665  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:50.839720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.881087  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:50.881122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:53.433345  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:53.446747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:53.446819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:53.482928  585602 cri.go:89] found id: ""
	I1205 20:33:53.482967  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.482979  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:53.482988  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:53.483048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:53.519096  585602 cri.go:89] found id: ""
	I1205 20:33:53.519128  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.519136  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:53.519142  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:53.519196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:53.556207  585602 cri.go:89] found id: ""
	I1205 20:33:53.556233  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.556243  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:53.556249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:53.556346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:53.589708  585602 cri.go:89] found id: ""
	I1205 20:33:53.589736  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.589745  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:53.589758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:53.589813  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:53.630344  585602 cri.go:89] found id: ""
	I1205 20:33:53.630371  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.630380  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:53.630386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:53.630438  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:53.668895  585602 cri.go:89] found id: ""
	I1205 20:33:53.668921  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.668929  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:53.668935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:53.668987  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:53.706601  585602 cri.go:89] found id: ""
	I1205 20:33:53.706628  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.706638  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:53.706644  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:53.706704  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:53.744922  585602 cri.go:89] found id: ""
	I1205 20:33:53.744952  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.744960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:53.744970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:53.744989  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:53.823816  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:53.823853  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:53.823928  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:53.905075  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:53.905118  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:53.955424  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:53.955468  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:54.014871  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:54.014916  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
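Every describe-nodes attempt above fails with "The connection to the server localhost:8443 was refused", i.e. kubectl is probing the apiserver's secure port before any kube-apiserver container has started. A minimal check on the node, assuming curl is available and that 8443 (taken from the error text) is the apiserver's secure port:

	# while the loop above is failing this reports a refusal; /healthz answers once the apiserver is up
	curl -sk https://localhost:8443/healthz || echo 'apiserver not reachable'
	# the describe-nodes step stops failing as soon as a kube-apiserver container appears here
	sudo crictl ps -a --name=kube-apiserver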
	I1205 20:33:56.537142  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:56.550409  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:56.550478  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:56.587148  585602 cri.go:89] found id: ""
	I1205 20:33:56.587174  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.587184  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:56.587190  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:56.587249  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:56.625153  585602 cri.go:89] found id: ""
	I1205 20:33:56.625180  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.625188  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:56.625193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:56.625243  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:56.671545  585602 cri.go:89] found id: ""
	I1205 20:33:56.671573  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.671582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:56.671589  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:56.671652  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:56.712760  585602 cri.go:89] found id: ""
	I1205 20:33:56.712797  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.712810  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:56.712818  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:56.712890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:56.751219  585602 cri.go:89] found id: ""
	I1205 20:33:56.751254  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.751266  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:56.751274  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:56.751340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:56.787946  585602 cri.go:89] found id: ""
	I1205 20:33:56.787985  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.787998  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:56.788007  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:56.788101  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:56.823057  585602 cri.go:89] found id: ""
	I1205 20:33:56.823095  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.823108  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:56.823114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:56.823170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:56.860358  585602 cri.go:89] found id: ""
	I1205 20:33:56.860396  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.860408  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:56.860421  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:56.860438  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:56.912954  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:56.912996  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.927642  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:56.927691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:57.007316  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:57.007344  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:57.007359  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:57.091471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:57.091522  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:59.642150  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:59.656240  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:59.656324  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:59.695918  585602 cri.go:89] found id: ""
	I1205 20:33:59.695954  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.695965  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:59.695973  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:59.696037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:59.744218  585602 cri.go:89] found id: ""
	I1205 20:33:59.744250  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.744260  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:59.744278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:59.744340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:59.799035  585602 cri.go:89] found id: ""
	I1205 20:33:59.799081  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.799094  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:59.799102  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:59.799172  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:59.850464  585602 cri.go:89] found id: ""
	I1205 20:33:59.850505  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.850517  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:59.850526  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:59.850590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:59.886441  585602 cri.go:89] found id: ""
	I1205 20:33:59.886477  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.886489  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:59.886497  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:59.886564  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:59.926689  585602 cri.go:89] found id: ""
	I1205 20:33:59.926728  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.926741  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:59.926751  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:59.926821  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:59.962615  585602 cri.go:89] found id: ""
	I1205 20:33:59.962644  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.962653  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:59.962659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:59.962716  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:00.001852  585602 cri.go:89] found id: ""
	I1205 20:34:00.001878  585602 logs.go:282] 0 containers: []
	W1205 20:34:00.001886  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:00.001897  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:00.001913  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:00.055465  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:00.055508  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:00.071904  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:00.071941  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:00.151225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:00.151248  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:00.151262  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:00.233869  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:00.233914  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:02.776751  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:02.790868  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:02.790945  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:02.834686  585602 cri.go:89] found id: ""
	I1205 20:34:02.834719  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.834731  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:02.834740  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:02.834823  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:02.871280  585602 cri.go:89] found id: ""
	I1205 20:34:02.871313  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.871333  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:02.871342  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:02.871413  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:02.907300  585602 cri.go:89] found id: ""
	I1205 20:34:02.907336  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.907346  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:02.907352  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:02.907406  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:02.945453  585602 cri.go:89] found id: ""
	I1205 20:34:02.945487  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.945499  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:02.945511  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:02.945587  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:02.980528  585602 cri.go:89] found id: ""
	I1205 20:34:02.980561  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.980573  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:02.980580  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:02.980653  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:03.016919  585602 cri.go:89] found id: ""
	I1205 20:34:03.016946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.016955  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:03.016961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:03.017012  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:03.053541  585602 cri.go:89] found id: ""
	I1205 20:34:03.053575  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.053588  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:03.053596  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:03.053655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:03.089907  585602 cri.go:89] found id: ""
	I1205 20:34:03.089946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.089959  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:03.089974  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:03.089991  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:03.144663  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:03.144700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:03.160101  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:03.160140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:03.231559  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:03.231583  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:03.231600  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:03.313226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:03.313271  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:05.855538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:05.869019  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:05.869120  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:05.906879  585602 cri.go:89] found id: ""
	I1205 20:34:05.906910  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.906921  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:05.906928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:05.906994  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:05.946846  585602 cri.go:89] found id: ""
	I1205 20:34:05.946881  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.946893  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:05.946900  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:05.946968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:05.984067  585602 cri.go:89] found id: ""
	I1205 20:34:05.984104  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.984118  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:05.984127  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:05.984193  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:06.024984  585602 cri.go:89] found id: ""
	I1205 20:34:06.025014  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.025023  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:06.025029  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:06.025091  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:06.064766  585602 cri.go:89] found id: ""
	I1205 20:34:06.064794  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.064806  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:06.064821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:06.064877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:06.105652  585602 cri.go:89] found id: ""
	I1205 20:34:06.105683  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.105691  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:06.105698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:06.105748  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:06.143732  585602 cri.go:89] found id: ""
	I1205 20:34:06.143762  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.143773  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:06.143781  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:06.143857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:06.183397  585602 cri.go:89] found id: ""
	I1205 20:34:06.183429  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.183439  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:06.183449  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:06.183462  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:06.236403  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:06.236449  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:06.250728  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:06.250759  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:06.320983  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:06.321009  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:06.321025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:06.408037  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:06.408084  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:08.955959  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:08.968956  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:08.969037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:09.002804  585602 cri.go:89] found id: ""
	I1205 20:34:09.002846  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.002859  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:09.002866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:09.002935  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:09.039098  585602 cri.go:89] found id: ""
	I1205 20:34:09.039191  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.039210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:09.039220  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:09.039291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:09.074727  585602 cri.go:89] found id: ""
	I1205 20:34:09.074764  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.074776  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:09.074792  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:09.074861  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:09.112650  585602 cri.go:89] found id: ""
	I1205 20:34:09.112682  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.112692  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:09.112698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:09.112754  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:09.149301  585602 cri.go:89] found id: ""
	I1205 20:34:09.149346  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.149359  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:09.149368  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:09.149432  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:09.190288  585602 cri.go:89] found id: ""
	I1205 20:34:09.190317  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.190329  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:09.190338  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:09.190404  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:09.225311  585602 cri.go:89] found id: ""
	I1205 20:34:09.225348  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.225361  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:09.225369  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:09.225435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:09.261023  585602 cri.go:89] found id: ""
	I1205 20:34:09.261052  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.261063  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:09.261075  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:09.261092  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:09.313733  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:09.313785  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:09.329567  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:09.329619  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:09.403397  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:09.403430  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:09.403447  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:09.486586  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:09.486630  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:12.028110  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:12.041802  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:12.041866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:12.080349  585602 cri.go:89] found id: ""
	I1205 20:34:12.080388  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.080402  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:12.080410  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:12.080475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:12.121455  585602 cri.go:89] found id: ""
	I1205 20:34:12.121486  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.121499  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:12.121507  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:12.121567  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:12.157743  585602 cri.go:89] found id: ""
	I1205 20:34:12.157768  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.157785  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:12.157794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:12.157855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:12.196901  585602 cri.go:89] found id: ""
	I1205 20:34:12.196933  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.196946  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:12.196954  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:12.197024  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:12.234471  585602 cri.go:89] found id: ""
	I1205 20:34:12.234500  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.234508  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:12.234516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:12.234585  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:12.269238  585602 cri.go:89] found id: ""
	I1205 20:34:12.269263  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.269271  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:12.269278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:12.269340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:12.307965  585602 cri.go:89] found id: ""
	I1205 20:34:12.308006  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.308016  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:12.308022  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:12.308081  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:12.343463  585602 cri.go:89] found id: ""
	I1205 20:34:12.343497  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.343510  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:12.343536  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:12.343574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:12.393393  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:12.393437  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:12.407991  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:12.408025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:12.477868  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:12.477910  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:12.477924  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:12.557274  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:12.557315  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.102587  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:15.115734  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:15.115808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:15.153057  585602 cri.go:89] found id: ""
	I1205 20:34:15.153091  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.153105  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:15.153113  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:15.153182  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:15.192762  585602 cri.go:89] found id: ""
	I1205 20:34:15.192815  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.192825  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:15.192831  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:15.192887  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:15.231330  585602 cri.go:89] found id: ""
	I1205 20:34:15.231364  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.231374  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:15.231380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:15.231435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:15.265229  585602 cri.go:89] found id: ""
	I1205 20:34:15.265262  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.265271  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:15.265278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:15.265350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:15.299596  585602 cri.go:89] found id: ""
	I1205 20:34:15.299624  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.299634  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:15.299640  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:15.299699  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:15.336155  585602 cri.go:89] found id: ""
	I1205 20:34:15.336187  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.336195  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:15.336202  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:15.336256  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:15.371867  585602 cri.go:89] found id: ""
	I1205 20:34:15.371899  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.371909  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:15.371920  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:15.371976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:15.408536  585602 cri.go:89] found id: ""
	I1205 20:34:15.408566  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.408580  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:15.408592  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:15.408609  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:15.422499  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:15.422538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:15.495096  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:15.495131  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:15.495145  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:15.571411  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:15.571461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.612284  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:15.612319  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:18.168869  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:18.184247  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:18.184370  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:18.226078  585602 cri.go:89] found id: ""
	I1205 20:34:18.226112  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.226124  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:18.226133  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:18.226202  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:18.266221  585602 cri.go:89] found id: ""
	I1205 20:34:18.266258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.266270  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:18.266278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:18.266349  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:18.305876  585602 cri.go:89] found id: ""
	I1205 20:34:18.305903  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.305912  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:18.305921  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:18.305971  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:18.342044  585602 cri.go:89] found id: ""
	I1205 20:34:18.342077  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.342089  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:18.342098  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:18.342160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:18.380240  585602 cri.go:89] found id: ""
	I1205 20:34:18.380290  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.380301  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:18.380310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:18.380372  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:18.416228  585602 cri.go:89] found id: ""
	I1205 20:34:18.416258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.416301  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:18.416311  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:18.416380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:18.453368  585602 cri.go:89] found id: ""
	I1205 20:34:18.453407  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.453420  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:18.453429  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:18.453513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:18.491689  585602 cri.go:89] found id: ""
	I1205 20:34:18.491727  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.491739  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:18.491754  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:18.491779  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:18.546614  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:18.546652  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:18.560516  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:18.560547  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:18.637544  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:18.637568  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:18.637582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:18.720410  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:18.720453  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:21.261494  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:21.276378  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:21.276473  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:21.317571  585602 cri.go:89] found id: ""
	I1205 20:34:21.317602  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.317610  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:21.317617  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:21.317670  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:21.355174  585602 cri.go:89] found id: ""
	I1205 20:34:21.355202  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.355210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:21.355217  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:21.355277  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:21.393259  585602 cri.go:89] found id: ""
	I1205 20:34:21.393297  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.393310  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:21.393317  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:21.393408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:21.432286  585602 cri.go:89] found id: ""
	I1205 20:34:21.432329  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.432341  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:21.432348  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:21.432415  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:21.469844  585602 cri.go:89] found id: ""
	I1205 20:34:21.469877  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.469888  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:21.469896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:21.469964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:21.508467  585602 cri.go:89] found id: ""
	I1205 20:34:21.508507  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.508519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:21.508528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:21.508592  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:21.553053  585602 cri.go:89] found id: ""
	I1205 20:34:21.553185  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.553208  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:21.553226  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:21.553317  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:21.590595  585602 cri.go:89] found id: ""
	I1205 20:34:21.590629  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.590640  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:21.590654  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:21.590672  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:21.649493  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:21.649546  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:21.666114  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:21.666147  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:21.742801  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:21.742828  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:21.742858  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:21.822949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:21.823010  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:24.366575  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:24.380894  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:24.380992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:24.416907  585602 cri.go:89] found id: ""
	I1205 20:34:24.416943  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.416956  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:24.416965  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:24.417034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:24.453303  585602 cri.go:89] found id: ""
	I1205 20:34:24.453337  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.453349  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:24.453358  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:24.453445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:24.496795  585602 cri.go:89] found id: ""
	I1205 20:34:24.496825  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.496833  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:24.496839  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:24.496907  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:24.539105  585602 cri.go:89] found id: ""
	I1205 20:34:24.539142  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.539154  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:24.539162  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:24.539230  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:24.576778  585602 cri.go:89] found id: ""
	I1205 20:34:24.576808  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.576816  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:24.576822  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:24.576879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:24.617240  585602 cri.go:89] found id: ""
	I1205 20:34:24.617271  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.617280  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:24.617293  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:24.617374  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:24.659274  585602 cri.go:89] found id: ""
	I1205 20:34:24.659316  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.659330  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:24.659342  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:24.659408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:24.701047  585602 cri.go:89] found id: ""
	I1205 20:34:24.701092  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.701105  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:24.701121  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:24.701139  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:24.741070  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:24.741115  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:24.793364  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:24.793407  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:24.807803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:24.807839  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:24.883194  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:24.883225  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:24.883243  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:27.467460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:27.483055  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:27.483129  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:27.523718  585602 cri.go:89] found id: ""
	I1205 20:34:27.523752  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.523763  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:27.523772  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:27.523841  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:27.562872  585602 cri.go:89] found id: ""
	I1205 20:34:27.562899  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.562908  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:27.562915  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:27.562976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:27.601804  585602 cri.go:89] found id: ""
	I1205 20:34:27.601835  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.601845  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:27.601852  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:27.601916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:27.640553  585602 cri.go:89] found id: ""
	I1205 20:34:27.640589  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.640599  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:27.640605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:27.640672  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:27.680983  585602 cri.go:89] found id: ""
	I1205 20:34:27.681015  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.681027  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:27.681035  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:27.681105  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:27.720766  585602 cri.go:89] found id: ""
	I1205 20:34:27.720811  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.720821  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:27.720828  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:27.720886  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:27.761422  585602 cri.go:89] found id: ""
	I1205 20:34:27.761453  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.761466  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:27.761480  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:27.761550  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:27.799658  585602 cri.go:89] found id: ""
	I1205 20:34:27.799692  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.799705  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:27.799720  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:27.799736  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:27.851801  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:27.851845  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:27.865953  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:27.865984  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:27.941787  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:27.941824  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:27.941840  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:28.023556  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:28.023616  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:30.573267  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:30.586591  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:30.586679  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:30.629923  585602 cri.go:89] found id: ""
	I1205 20:34:30.629960  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.629974  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:30.629982  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:30.630048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:30.667045  585602 cri.go:89] found id: ""
	I1205 20:34:30.667078  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.667090  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:30.667098  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:30.667167  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:30.704479  585602 cri.go:89] found id: ""
	I1205 20:34:30.704510  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.704522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:30.704530  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:30.704620  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:30.746035  585602 cri.go:89] found id: ""
	I1205 20:34:30.746065  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.746077  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:30.746085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:30.746161  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:30.784375  585602 cri.go:89] found id: ""
	I1205 20:34:30.784415  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.784425  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:30.784431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:30.784487  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:30.821779  585602 cri.go:89] found id: ""
	I1205 20:34:30.821811  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.821822  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:30.821831  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:30.821905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:30.856927  585602 cri.go:89] found id: ""
	I1205 20:34:30.856963  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.856976  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:30.856984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:30.857088  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:30.895852  585602 cri.go:89] found id: ""
	I1205 20:34:30.895882  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.895894  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:30.895914  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:30.895930  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:30.947600  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:30.947642  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:30.962717  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:30.962753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:31.049225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:31.049262  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:31.049280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:31.126806  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:31.126850  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:33.670844  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:33.685063  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:33.685160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:33.718277  585602 cri.go:89] found id: ""
	I1205 20:34:33.718312  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.718321  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:33.718327  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:33.718378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.755409  585602 cri.go:89] found id: ""
	I1205 20:34:33.755445  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.755456  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:33.755465  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:33.755542  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:33.809447  585602 cri.go:89] found id: ""
	I1205 20:34:33.809506  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.809519  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:33.809527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:33.809599  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:33.848327  585602 cri.go:89] found id: ""
	I1205 20:34:33.848362  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.848376  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:33.848384  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:33.848444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:33.887045  585602 cri.go:89] found id: ""
	I1205 20:34:33.887082  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.887094  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:33.887103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:33.887178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:33.924385  585602 cri.go:89] found id: ""
	I1205 20:34:33.924418  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.924427  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:33.924434  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:33.924499  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:33.960711  585602 cri.go:89] found id: ""
	I1205 20:34:33.960738  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.960747  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:33.960757  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:33.960808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:33.998150  585602 cri.go:89] found id: ""
	I1205 20:34:33.998184  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.998193  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:33.998203  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:33.998215  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:34.041977  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:34.042006  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:34.095895  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:34.095940  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:34.109802  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:34.109836  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:34.185716  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:34.185740  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:34.185753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:36.767768  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:36.782114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:36.782201  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:36.820606  585602 cri.go:89] found id: ""
	I1205 20:34:36.820647  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.820659  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:36.820668  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:36.820736  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:36.858999  585602 cri.go:89] found id: ""
	I1205 20:34:36.859033  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.859044  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:36.859051  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:36.859117  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:36.896222  585602 cri.go:89] found id: ""
	I1205 20:34:36.896257  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.896282  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:36.896290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:36.896352  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:36.935565  585602 cri.go:89] found id: ""
	I1205 20:34:36.935602  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.935612  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:36.935618  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:36.935671  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:36.974031  585602 cri.go:89] found id: ""
	I1205 20:34:36.974066  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.974079  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:36.974096  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:36.974166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:37.018243  585602 cri.go:89] found id: ""
	I1205 20:34:37.018278  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.018290  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:37.018300  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:37.018371  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:37.057715  585602 cri.go:89] found id: ""
	I1205 20:34:37.057742  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.057750  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:37.057756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:37.057806  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:37.099006  585602 cri.go:89] found id: ""
	I1205 20:34:37.099037  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.099045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:37.099055  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:37.099070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:37.186218  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:37.186264  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:37.232921  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:37.232955  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:37.285539  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:37.285581  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:37.301115  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:37.301155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:37.373249  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:39.873692  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:39.887772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:39.887847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:39.925558  585602 cri.go:89] found id: ""
	I1205 20:34:39.925595  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.925607  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:39.925615  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:39.925684  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:39.964967  585602 cri.go:89] found id: ""
	I1205 20:34:39.964994  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.965004  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:39.965011  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:39.965073  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:40.010875  585602 cri.go:89] found id: ""
	I1205 20:34:40.010911  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.010923  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:40.010930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:40.011003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:40.050940  585602 cri.go:89] found id: ""
	I1205 20:34:40.050970  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.050981  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:40.050990  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:40.051052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:40.086157  585602 cri.go:89] found id: ""
	I1205 20:34:40.086197  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.086210  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:40.086219  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:40.086283  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:40.123280  585602 cri.go:89] found id: ""
	I1205 20:34:40.123321  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.123333  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:40.123344  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:40.123414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:40.164755  585602 cri.go:89] found id: ""
	I1205 20:34:40.164784  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.164793  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:40.164800  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:40.164871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:40.211566  585602 cri.go:89] found id: ""
	I1205 20:34:40.211595  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.211608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:40.211621  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:40.211638  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:40.275269  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:40.275326  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:40.303724  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:40.303754  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:40.377315  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:40.377345  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:40.377360  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:40.457744  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:40.457794  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:43.000390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:43.015220  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:43.015308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:43.051919  585602 cri.go:89] found id: ""
	I1205 20:34:43.051946  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.051955  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:43.051961  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:43.052034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:43.088188  585602 cri.go:89] found id: ""
	I1205 20:34:43.088230  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.088241  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:43.088249  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:43.088350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:43.125881  585602 cri.go:89] found id: ""
	I1205 20:34:43.125910  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.125922  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:43.125930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:43.125988  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:43.166630  585602 cri.go:89] found id: ""
	I1205 20:34:43.166657  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.166674  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:43.166682  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:43.166744  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:43.206761  585602 cri.go:89] found id: ""
	I1205 20:34:43.206791  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.206803  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:43.206810  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:43.206873  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:43.242989  585602 cri.go:89] found id: ""
	I1205 20:34:43.243017  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.243026  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:43.243033  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:43.243094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:43.281179  585602 cri.go:89] found id: ""
	I1205 20:34:43.281208  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.281217  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:43.281223  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:43.281272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:43.317283  585602 cri.go:89] found id: ""
	I1205 20:34:43.317314  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.317326  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:43.317347  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:43.317362  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:43.369262  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:43.369303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:43.386137  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:43.386182  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:43.458532  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:43.458553  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:43.458566  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:43.538254  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:43.538296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:46.083593  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:46.101024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:46.101133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:46.169786  585602 cri.go:89] found id: ""
	I1205 20:34:46.169817  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.169829  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:46.169838  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:46.169905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:46.218647  585602 cri.go:89] found id: ""
	I1205 20:34:46.218689  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.218704  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:46.218713  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:46.218790  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:46.262718  585602 cri.go:89] found id: ""
	I1205 20:34:46.262749  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.262758  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:46.262764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:46.262846  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:46.301606  585602 cri.go:89] found id: ""
	I1205 20:34:46.301638  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.301649  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:46.301656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:46.301714  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:46.337313  585602 cri.go:89] found id: ""
	I1205 20:34:46.337347  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.337356  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:46.337362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:46.337422  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:46.380171  585602 cri.go:89] found id: ""
	I1205 20:34:46.380201  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.380209  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:46.380215  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:46.380288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:46.423054  585602 cri.go:89] found id: ""
	I1205 20:34:46.423089  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.423101  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:46.423109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:46.423178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:46.467615  585602 cri.go:89] found id: ""
	I1205 20:34:46.467647  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.467659  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:46.467673  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:46.467687  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:46.522529  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:46.522579  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:46.537146  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:46.537199  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:46.609585  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:46.609618  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:46.609637  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:46.696093  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:46.696152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:49.238735  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:49.256406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:49.256484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:49.294416  585602 cri.go:89] found id: ""
	I1205 20:34:49.294449  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.294458  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:49.294467  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:49.294528  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:49.334235  585602 cri.go:89] found id: ""
	I1205 20:34:49.334268  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.334282  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:49.334290  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:49.334362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:49.372560  585602 cri.go:89] found id: ""
	I1205 20:34:49.372637  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.372662  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:49.372674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:49.372756  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:49.413779  585602 cri.go:89] found id: ""
	I1205 20:34:49.413813  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.413822  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:49.413829  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:49.413900  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:49.449513  585602 cri.go:89] found id: ""
	I1205 20:34:49.449543  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.449553  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:49.449560  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:49.449630  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:49.488923  585602 cri.go:89] found id: ""
	I1205 20:34:49.488961  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.488973  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:49.488982  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:49.489050  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:49.524922  585602 cri.go:89] found id: ""
	I1205 20:34:49.524959  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.524971  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:49.524980  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:49.525048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:49.565700  585602 cri.go:89] found id: ""
	I1205 20:34:49.565735  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.565745  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:49.565756  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:49.565769  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:49.624297  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:49.624339  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:49.641424  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:49.641465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:49.721474  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:49.721504  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:49.721517  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:49.810777  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:49.810822  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:52.354661  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:52.368481  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:52.368555  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:52.407081  585602 cri.go:89] found id: ""
	I1205 20:34:52.407110  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.407118  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:52.407125  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:52.407189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:52.444462  585602 cri.go:89] found id: ""
	I1205 20:34:52.444489  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.444498  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:52.444505  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:52.444562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:52.483546  585602 cri.go:89] found id: ""
	I1205 20:34:52.483573  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.483582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:52.483595  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:52.483648  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:52.526529  585602 cri.go:89] found id: ""
	I1205 20:34:52.526567  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.526579  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:52.526587  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:52.526655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:52.564875  585602 cri.go:89] found id: ""
	I1205 20:34:52.564904  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.564913  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:52.564919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:52.564984  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:52.599367  585602 cri.go:89] found id: ""
	I1205 20:34:52.599397  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.599410  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:52.599419  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:52.599475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:52.638192  585602 cri.go:89] found id: ""
	I1205 20:34:52.638233  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.638247  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:52.638255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:52.638336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:52.675227  585602 cri.go:89] found id: ""
	I1205 20:34:52.675264  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.675275  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:52.675287  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:52.675311  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:52.716538  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:52.716582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:52.772121  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:52.772162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:52.787598  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:52.787632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:52.865380  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:52.865408  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:52.865422  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.449288  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:55.462386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:55.462474  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:55.498350  585602 cri.go:89] found id: ""
	I1205 20:34:55.498382  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.498391  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:55.498397  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:55.498457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:55.540878  585602 cri.go:89] found id: ""
	I1205 20:34:55.540915  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.540929  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:55.540939  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:55.541022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:55.577248  585602 cri.go:89] found id: ""
	I1205 20:34:55.577277  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.577288  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:55.577294  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:55.577375  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:55.615258  585602 cri.go:89] found id: ""
	I1205 20:34:55.615287  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.615308  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:55.615316  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:55.615384  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:55.652102  585602 cri.go:89] found id: ""
	I1205 20:34:55.652136  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.652147  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:55.652157  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:55.652228  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:55.689353  585602 cri.go:89] found id: ""
	I1205 20:34:55.689387  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.689399  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:55.689408  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:55.689486  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:55.727603  585602 cri.go:89] found id: ""
	I1205 20:34:55.727634  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.727648  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:55.727657  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:55.727729  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:55.765103  585602 cri.go:89] found id: ""
	I1205 20:34:55.765134  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.765143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:55.765156  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:55.765169  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:55.823878  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:55.823923  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:55.838966  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:55.839001  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:55.909385  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:55.909412  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:55.909424  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.992036  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:55.992080  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:58.537231  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:58.552307  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:58.552392  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:58.589150  585602 cri.go:89] found id: ""
	I1205 20:34:58.589184  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.589200  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:58.589206  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:58.589272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:58.630344  585602 cri.go:89] found id: ""
	I1205 20:34:58.630370  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.630378  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:58.630385  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:58.630452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:58.669953  585602 cri.go:89] found id: ""
	I1205 20:34:58.669981  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.669991  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:58.669999  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:58.670055  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:58.708532  585602 cri.go:89] found id: ""
	I1205 20:34:58.708562  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.708570  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:58.708577  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:58.708631  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:58.745944  585602 cri.go:89] found id: ""
	I1205 20:34:58.745975  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.745986  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:58.745994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:58.746051  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.787177  585602 cri.go:89] found id: ""
	I1205 20:34:58.787206  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.787214  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:58.787221  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:58.787272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:58.822084  585602 cri.go:89] found id: ""
	I1205 20:34:58.822123  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.822134  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:58.822142  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:58.822210  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:58.858608  585602 cri.go:89] found id: ""
	I1205 20:34:58.858645  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.858657  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:58.858670  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:58.858691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:58.873289  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:58.873322  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:58.947855  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:58.947884  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:58.947900  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:59.028348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:59.028397  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:59.069172  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:59.069206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.623309  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:01.637362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:01.637449  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:01.678867  585602 cri.go:89] found id: ""
	I1205 20:35:01.678907  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.678919  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:01.678928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:01.679001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:01.715333  585602 cri.go:89] found id: ""
	I1205 20:35:01.715364  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.715372  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:01.715379  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:01.715439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:01.754247  585602 cri.go:89] found id: ""
	I1205 20:35:01.754277  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.754286  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:01.754292  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:01.754348  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:01.791922  585602 cri.go:89] found id: ""
	I1205 20:35:01.791957  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.791968  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:01.791977  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:01.792045  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:01.827261  585602 cri.go:89] found id: ""
	I1205 20:35:01.827294  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.827307  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:01.827315  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:01.827389  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:01.864205  585602 cri.go:89] found id: ""
	I1205 20:35:01.864234  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.864243  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:01.864249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:01.864332  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:01.902740  585602 cri.go:89] found id: ""
	I1205 20:35:01.902773  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.902783  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:01.902789  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:01.902857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:01.941627  585602 cri.go:89] found id: ""
	I1205 20:35:01.941657  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.941666  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:01.941677  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:01.941690  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.995743  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:01.995791  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:02.010327  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:02.010368  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:02.086879  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:02.086907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:02.086921  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:02.166500  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:02.166538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:04.716638  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:04.730922  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:04.730992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:04.768492  585602 cri.go:89] found id: ""
	I1205 20:35:04.768524  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.768534  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:04.768540  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:04.768606  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:04.803740  585602 cri.go:89] found id: ""
	I1205 20:35:04.803776  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.803789  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:04.803797  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:04.803866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:04.840907  585602 cri.go:89] found id: ""
	I1205 20:35:04.840947  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.840960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:04.840968  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:04.841036  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:04.875901  585602 cri.go:89] found id: ""
	I1205 20:35:04.875933  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.875943  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:04.875949  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:04.876003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:04.913581  585602 cri.go:89] found id: ""
	I1205 20:35:04.913617  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.913627  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:04.913634  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:04.913689  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:04.952460  585602 cri.go:89] found id: ""
	I1205 20:35:04.952504  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.952519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:04.952528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:04.952617  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:04.989939  585602 cri.go:89] found id: ""
	I1205 20:35:04.989968  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.989979  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:04.989985  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:04.990041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:05.025017  585602 cri.go:89] found id: ""
	I1205 20:35:05.025052  585602 logs.go:282] 0 containers: []
	W1205 20:35:05.025066  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:05.025078  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:05.025094  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:05.068179  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:05.068223  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:05.127311  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:05.127369  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:05.141092  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:05.141129  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:05.217648  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:05.217678  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:05.217691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:07.793457  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:07.808710  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:07.808778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:07.846331  585602 cri.go:89] found id: ""
	I1205 20:35:07.846366  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.846380  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:07.846389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:07.846462  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:07.881185  585602 cri.go:89] found id: ""
	I1205 20:35:07.881222  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.881236  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:07.881243  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:07.881307  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:07.918463  585602 cri.go:89] found id: ""
	I1205 20:35:07.918501  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.918514  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:07.918522  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:07.918589  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:07.956329  585602 cri.go:89] found id: ""
	I1205 20:35:07.956364  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.956375  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:07.956385  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:07.956456  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:07.992173  585602 cri.go:89] found id: ""
	I1205 20:35:07.992212  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.992222  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:07.992229  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:07.992318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:08.030183  585602 cri.go:89] found id: ""
	I1205 20:35:08.030214  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.030226  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:08.030235  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:08.030309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:08.072320  585602 cri.go:89] found id: ""
	I1205 20:35:08.072362  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.072374  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:08.072382  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:08.072452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:08.124220  585602 cri.go:89] found id: ""
	I1205 20:35:08.124253  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.124277  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:08.124292  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:08.124310  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:08.171023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:08.171057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:08.237645  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:08.237699  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:08.252708  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:08.252744  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:08.343107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:08.343140  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:08.343158  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:10.919646  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:10.934494  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:10.934562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:10.971816  585602 cri.go:89] found id: ""
	I1205 20:35:10.971855  585602 logs.go:282] 0 containers: []
	W1205 20:35:10.971868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:10.971878  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:10.971950  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:11.010031  585602 cri.go:89] found id: ""
	I1205 20:35:11.010071  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.010084  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:11.010095  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:11.010170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:11.046520  585602 cri.go:89] found id: ""
	I1205 20:35:11.046552  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.046561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:11.046568  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:11.046632  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:11.081385  585602 cri.go:89] found id: ""
	I1205 20:35:11.081426  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.081440  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:11.081448  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:11.081522  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:11.122529  585602 cri.go:89] found id: ""
	I1205 20:35:11.122559  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.122568  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:11.122576  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:11.122656  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:11.161684  585602 cri.go:89] found id: ""
	I1205 20:35:11.161767  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.161788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:11.161797  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:11.161862  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:11.199796  585602 cri.go:89] found id: ""
	I1205 20:35:11.199824  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.199833  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:11.199842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:11.199916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:11.235580  585602 cri.go:89] found id: ""
	I1205 20:35:11.235617  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.235625  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:11.235635  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:11.235647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:11.291005  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:11.291055  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:11.305902  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:11.305947  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:11.375862  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:11.375894  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:11.375915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:11.456701  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:11.456746  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:14.006509  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:14.020437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:14.020531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:14.056878  585602 cri.go:89] found id: ""
	I1205 20:35:14.056905  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.056915  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:14.056923  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:14.056993  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:14.091747  585602 cri.go:89] found id: ""
	I1205 20:35:14.091782  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.091792  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:14.091800  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:14.091860  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:14.131409  585602 cri.go:89] found id: ""
	I1205 20:35:14.131440  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.131453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:14.131461  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:14.131532  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:14.170726  585602 cri.go:89] found id: ""
	I1205 20:35:14.170754  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.170765  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:14.170773  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:14.170851  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:14.208619  585602 cri.go:89] found id: ""
	I1205 20:35:14.208654  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.208666  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:14.208674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:14.208747  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:14.247734  585602 cri.go:89] found id: ""
	I1205 20:35:14.247771  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.247784  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:14.247793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:14.247855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:14.296090  585602 cri.go:89] found id: ""
	I1205 20:35:14.296119  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.296129  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:14.296136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:14.296205  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:14.331009  585602 cri.go:89] found id: ""
	I1205 20:35:14.331037  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.331045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:14.331057  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:14.331070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:14.384877  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:14.384935  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:14.400458  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:14.400507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:14.475745  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:14.475774  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:14.475787  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:14.553150  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:14.553192  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:17.095700  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:17.109135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:17.109215  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:17.146805  585602 cri.go:89] found id: ""
	I1205 20:35:17.146838  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.146851  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:17.146861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:17.146919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:17.186861  585602 cri.go:89] found id: ""
	I1205 20:35:17.186891  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.186901  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:17.186907  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:17.186960  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:17.223113  585602 cri.go:89] found id: ""
	I1205 20:35:17.223148  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.223159  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:17.223166  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:17.223238  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:17.263066  585602 cri.go:89] found id: ""
	I1205 20:35:17.263098  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.263110  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:17.263118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:17.263187  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:17.300113  585602 cri.go:89] found id: ""
	I1205 20:35:17.300153  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.300167  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:17.300175  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:17.300237  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:17.339135  585602 cri.go:89] found id: ""
	I1205 20:35:17.339172  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.339184  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:17.339193  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:17.339260  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:17.376200  585602 cri.go:89] found id: ""
	I1205 20:35:17.376229  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.376239  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:17.376248  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:17.376354  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:17.411852  585602 cri.go:89] found id: ""
	I1205 20:35:17.411895  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.411906  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:17.411919  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:17.411948  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:17.463690  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:17.463729  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:17.478912  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:17.478946  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:17.552874  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:17.552907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:17.552933  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:17.633621  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:17.633667  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
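The repeated blocks above are minikube polling for a healthy control plane after the restart: it looks for a kube-apiserver process, asks CRI-O for each expected control-plane container, and, finding none, gathers kubelet, dmesg, CRI-O and container-status logs before checking again a few seconds later. The same checks can be run by hand on the node; the individual commands are the ones in the log, only the loop around them is an illustrative sketch:

    # look for a running kube-apiserver process, as minikube does via pgrep
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

    # ask the CRI runtime for each container minikube expects to find
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -n "$ids" ] || echo "no container found matching \"$name\""
    done

    # overall container status, with the same docker fallback the log uses
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a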
	I1205 20:35:20.175664  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:20.191495  585602 kubeadm.go:597] duration metric: took 4m4.568774806s to restartPrimaryControlPlane
	W1205 20:35:20.191570  585602 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:20.191594  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:20.660014  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:20.676684  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:20.688338  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:20.699748  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:20.699770  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:20.699822  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:20.710417  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:20.710497  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:20.722295  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:20.732854  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:20.732933  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:20.744242  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.754593  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:20.754671  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.766443  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:20.777087  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:20.777157  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
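The grep/rm sequence above is minikube's stale-kubeconfig cleanup before re-running kubeadm init: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and removed otherwise. Here every grep exits with status 2 because the earlier kubeadm reset already deleted the files, so the rm calls are no-ops. A compact sketch of the same pattern (the loop and grep -q are illustrative; the paths and endpoint are the ones in the log):

    endpoint="https://control-plane.minikube.internal:8443"
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        f="/etc/kubernetes/$conf"
        # keep the kubeconfig only if it already targets the expected endpoint
        sudo grep -q "$endpoint" "$f" 2>/dev/null || sudo rm -f "$f"
    done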
	I1205 20:35:20.788406  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:20.869602  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:35:20.869778  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:21.022417  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:21.022558  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:21.022715  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:35:21.213817  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:21.216995  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:21.217146  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:21.217240  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:21.217373  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:21.217502  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:21.217614  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:21.217699  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:21.217784  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:21.217876  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:21.217985  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:21.218129  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:21.218186  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:21.218289  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:21.337924  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:21.464355  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:21.709734  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:21.837040  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:21.860767  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:21.860894  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:21.860934  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:22.002564  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:22.004407  585602 out.go:235]   - Booting up control plane ...
	I1205 20:35:22.004560  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:22.009319  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:22.010412  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:22.019041  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:22.021855  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:36:02.025194  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:36:02.025306  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:02.025498  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:07.025608  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:07.025922  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:17.026490  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:17.026747  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:37.027599  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:37.027910  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:37:17.029681  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:37:17.029940  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:37:17.029963  585602 kubeadm.go:310] 
	I1205 20:37:17.030022  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:37:17.030101  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:37:17.030128  585602 kubeadm.go:310] 
	I1205 20:37:17.030167  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:37:17.030209  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:37:17.030353  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:37:17.030369  585602 kubeadm.go:310] 
	I1205 20:37:17.030489  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:37:17.030540  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:37:17.030584  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:37:17.030594  585602 kubeadm.go:310] 
	I1205 20:37:17.030733  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:37:17.030843  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:37:17.030855  585602 kubeadm.go:310] 
	I1205 20:37:17.031025  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:37:17.031154  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:37:17.031268  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:37:17.031374  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:37:17.031386  585602 kubeadm.go:310] 
	I1205 20:37:17.032368  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:17.032493  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:37:17.032562  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 20:37:17.032709  585602 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
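Both this first attempt and the retry below fail in the same way: kubeadm's wait-control-plane phase repeatedly probes the kubelet's health endpoint on localhost:10248 and never gets an answer, so init gives up without the static control-plane pods ever coming up. The probe and the follow-up checks kubeadm recommends can be reproduced on the node (run as root or with sudo); the commands below are taken from the output above:

    # the health check kubeadm keeps retrying while waiting for the kubelet
    curl -sSL http://localhost:10248/healthz

    # if the connection is refused, inspect the kubelet service itself
    systemctl status kubelet
    journalctl -xeu kubelet

    # list any Kubernetes containers CRI-O managed to start
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then read a failing container's logs with:
    #   crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID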
	
	I1205 20:37:17.032762  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:37:17.518572  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:17.533868  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:37:17.547199  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:37:17.547224  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:37:17.547272  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:37:17.556733  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:37:17.556801  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:37:17.566622  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:37:17.577044  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:37:17.577121  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:37:17.588726  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.599269  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:37:17.599346  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.609243  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:37:17.618947  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:37:17.619034  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:37:17.629228  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:37:17.878785  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:39:13.972213  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:39:13.972379  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 20:39:13.973936  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:39:13.974035  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:39:13.974150  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:39:13.974251  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:39:13.974341  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:39:13.974404  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:39:13.976164  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:39:13.976248  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:39:13.976339  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:39:13.976449  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:39:13.976538  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:39:13.976642  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:39:13.976736  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:39:13.976832  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:39:13.976924  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:39:13.977025  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:39:13.977131  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:39:13.977189  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:39:13.977272  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:39:13.977389  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:39:13.977474  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:39:13.977566  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:39:13.977650  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:39:13.977776  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:39:13.977901  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:39:13.977976  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:39:13.978137  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:39:13.979473  585602 out.go:235]   - Booting up control plane ...
	I1205 20:39:13.979581  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:39:13.979664  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:39:13.979732  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:39:13.979803  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:39:13.979952  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:39:13.980017  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:39:13.980107  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980396  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980511  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980744  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980843  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981116  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981227  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981439  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981528  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981718  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981731  585602 kubeadm.go:310] 
	I1205 20:39:13.981773  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:39:13.981831  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:39:13.981839  585602 kubeadm.go:310] 
	I1205 20:39:13.981888  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:39:13.981941  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:39:13.982052  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:39:13.982059  585602 kubeadm.go:310] 
	I1205 20:39:13.982144  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:39:13.982174  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:39:13.982208  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:39:13.982215  585602 kubeadm.go:310] 
	I1205 20:39:13.982302  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:39:13.982415  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:39:13.982431  585602 kubeadm.go:310] 
	I1205 20:39:13.982540  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:39:13.982618  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:39:13.982701  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:39:13.982766  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:39:13.982839  585602 kubeadm.go:310] 
	I1205 20:39:13.982855  585602 kubeadm.go:394] duration metric: took 7m58.414377536s to StartCluster
	I1205 20:39:13.982907  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:39:13.982975  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:39:14.031730  585602 cri.go:89] found id: ""
	I1205 20:39:14.031767  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.031779  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:39:14.031791  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:39:14.031865  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:39:14.068372  585602 cri.go:89] found id: ""
	I1205 20:39:14.068420  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.068433  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:39:14.068440  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:39:14.068512  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:39:14.106807  585602 cri.go:89] found id: ""
	I1205 20:39:14.106837  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.106847  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:39:14.106856  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:39:14.106930  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:39:14.144926  585602 cri.go:89] found id: ""
	I1205 20:39:14.144952  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.144960  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:39:14.144974  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:39:14.145052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:39:14.182712  585602 cri.go:89] found id: ""
	I1205 20:39:14.182742  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.182754  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:39:14.182762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:39:14.182826  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:39:14.220469  585602 cri.go:89] found id: ""
	I1205 20:39:14.220505  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.220519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:39:14.220527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:39:14.220593  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:39:14.269791  585602 cri.go:89] found id: ""
	I1205 20:39:14.269823  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.269835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:39:14.269842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:39:14.269911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:39:14.313406  585602 cri.go:89] found id: ""
	I1205 20:39:14.313439  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.313450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:39:14.313464  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:39:14.313483  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:39:14.330488  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:39:14.330526  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:39:14.417358  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:39:14.417403  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:39:14.417421  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:39:14.530226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:39:14.530270  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:39:14.585471  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:39:14.585512  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 20:39:14.636389  585602 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 20:39:14.636456  585602 out.go:270] * 
	* 
	W1205 20:39:14.636535  585602 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.636549  585602 out.go:270] * 
	* 
	W1205 20:39:14.637475  585602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:39:14.640654  585602 out.go:201] 
	W1205 20:39:14.641873  585602 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.641931  585602 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 20:39:14.641975  585602 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 20:39:14.643389  585602 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-386085 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
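The kubeadm output captured above shows the kubelet never answering on http://localhost:10248/healthz, so the control plane never comes up and minikube exits with K8S_KUBELET_NOT_RUNNING. Below is a minimal sketch of the follow-up checks that output itself recommends, assuming the node is reachable with `minikube ssh`; the binary path and profile name are simply reused from the failing command above:

	# kubelet state and recent journal entries on the node
	out/minikube-linux-amd64 ssh -p old-k8s-version-386085 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-386085 -- sudo journalctl -xeu kubelet
	# list Kubernetes containers through the cri-o socket to spot a crashed control-plane component
	out/minikube-linux-amd64 ssh -p old-k8s-version-386085 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# retry the second start with the cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-386085 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
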
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 2 (251.989095ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-386085 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-386085 logs -n 25: (1.589983178s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-790679 -- sudo                         | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-790679                                 | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-886958                           | kubernetes-upgrade-886958    | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-816185             | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-789000            | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-242147 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | disable-driver-mounts-242147                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:25 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386085        | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-942599  | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-816185                  | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-789000                 | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:37 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386085             | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-942599       | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:36 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:28:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:28:03.038037  585929 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:28:03.038168  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038178  585929 out.go:358] Setting ErrFile to fd 2...
	I1205 20:28:03.038185  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038375  585929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:28:03.038955  585929 out.go:352] Setting JSON to false
	I1205 20:28:03.039948  585929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":11429,"bootTime":1733419054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:28:03.040015  585929 start.go:139] virtualization: kvm guest
	I1205 20:28:03.042326  585929 out.go:177] * [default-k8s-diff-port-942599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:28:03.044291  585929 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:28:03.044320  585929 notify.go:220] Checking for updates...
	I1205 20:28:03.047072  585929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:28:03.048480  585929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:28:03.049796  585929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:28:03.051035  585929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:28:03.052263  585929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:28:03.054167  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:28:03.054665  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.054749  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.070361  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33501
	I1205 20:28:03.070891  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.071534  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.071563  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.071995  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.072285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.072587  585929 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:28:03.072920  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.072968  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.088186  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I1205 20:28:03.088660  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.089202  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.089224  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.089542  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.089782  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.122562  585929 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:28:03.123970  585929 start.go:297] selected driver: kvm2
	I1205 20:28:03.123992  585929 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.124128  585929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:28:03.125014  585929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.125111  585929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:28:03.140461  585929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:28:03.140904  585929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:28:03.140943  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:28:03.141015  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:28:03.141067  585929 start.go:340] cluster config:
	{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.141179  585929 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.144215  585929 out.go:177] * Starting "default-k8s-diff-port-942599" primary control-plane node in "default-k8s-diff-port-942599" cluster
	I1205 20:28:03.276565  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:03.145620  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:28:03.145661  585929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:28:03.145676  585929 cache.go:56] Caching tarball of preloaded images
	I1205 20:28:03.145844  585929 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:28:03.145864  585929 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:28:03.146005  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:28:03.146240  585929 start.go:360] acquireMachinesLock for default-k8s-diff-port-942599: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:28:06.348547  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:12.428620  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:15.500614  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:21.580587  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:24.652618  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:30.732598  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:33.804612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:39.884624  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:42.956577  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:49.036617  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:52.108607  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:58.188605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:01.260573  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:07.340591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:10.412578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:16.492574  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:19.564578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:25.644591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:28.716619  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:34.796609  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:37.868605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:43.948594  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:47.020553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:53.100499  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:56.172560  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:02.252612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:05.324648  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:11.404563  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:14.476553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:20.556568  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:23.561620  585113 start.go:364] duration metric: took 4m32.790399884s to acquireMachinesLock for "embed-certs-789000"
	I1205 20:30:23.561696  585113 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:23.561711  585113 fix.go:54] fixHost starting: 
	I1205 20:30:23.562327  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:23.562400  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:23.578260  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I1205 20:30:23.578843  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:23.579379  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:30:23.579405  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:23.579776  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:23.580051  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:23.580222  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:30:23.582161  585113 fix.go:112] recreateIfNeeded on embed-certs-789000: state=Stopped err=<nil>
	I1205 20:30:23.582190  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	W1205 20:30:23.582386  585113 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:23.584585  585113 out.go:177] * Restarting existing kvm2 VM for "embed-certs-789000" ...
	I1205 20:30:23.586583  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Start
	I1205 20:30:23.586835  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring networks are active...
	I1205 20:30:23.587628  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network default is active
	I1205 20:30:23.587937  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network mk-embed-certs-789000 is active
	I1205 20:30:23.588228  585113 main.go:141] libmachine: (embed-certs-789000) Getting domain xml...
	I1205 20:30:23.588898  585113 main.go:141] libmachine: (embed-certs-789000) Creating domain...
	I1205 20:30:24.829936  585113 main.go:141] libmachine: (embed-certs-789000) Waiting to get IP...
	I1205 20:30:24.830897  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:24.831398  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:24.831465  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:24.831364  586433 retry.go:31] will retry after 208.795355ms: waiting for machine to come up
	I1205 20:30:25.042078  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.042657  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.042689  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.042599  586433 retry.go:31] will retry after 385.313968ms: waiting for machine to come up
	I1205 20:30:25.429439  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.429877  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.429913  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.429811  586433 retry.go:31] will retry after 432.591358ms: waiting for machine to come up
	I1205 20:30:23.558453  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:30:23.558508  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.558905  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:30:23.558943  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.559166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:30:23.561471  585025 machine.go:96] duration metric: took 4m37.380964872s to provisionDockerMachine
	I1205 20:30:23.561518  585025 fix.go:56] duration metric: took 4m37.403172024s for fixHost
	I1205 20:30:23.561524  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 4m37.40319095s
	W1205 20:30:23.561546  585025 start.go:714] error starting host: provision: host is not running
	W1205 20:30:23.561677  585025 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:30:23.561688  585025 start.go:729] Will try again in 5 seconds ...
	I1205 20:30:25.864656  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.865217  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.865255  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.865138  586433 retry.go:31] will retry after 571.148349ms: waiting for machine to come up
	I1205 20:30:26.437644  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:26.438220  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:26.438250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:26.438165  586433 retry.go:31] will retry after 585.234455ms: waiting for machine to come up
	I1205 20:30:27.025107  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.025510  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.025538  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.025459  586433 retry.go:31] will retry after 648.291531ms: waiting for machine to come up
	I1205 20:30:27.675457  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.675898  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.675928  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.675838  586433 retry.go:31] will retry after 804.071148ms: waiting for machine to come up
	I1205 20:30:28.481966  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:28.482386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:28.482416  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:28.482329  586433 retry.go:31] will retry after 905.207403ms: waiting for machine to come up
	I1205 20:30:29.388933  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:29.389546  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:29.389571  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:29.389484  586433 retry.go:31] will retry after 1.48894232s: waiting for machine to come up
	I1205 20:30:28.562678  585025 start.go:360] acquireMachinesLock for no-preload-816185: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:30:30.880218  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:30.880742  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:30.880773  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:30.880685  586433 retry.go:31] will retry after 2.314200549s: waiting for machine to come up
	I1205 20:30:33.198477  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:33.198998  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:33.199029  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:33.198945  586433 retry.go:31] will retry after 1.922541264s: waiting for machine to come up
	I1205 20:30:35.123922  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:35.124579  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:35.124607  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:35.124524  586433 retry.go:31] will retry after 3.537087912s: waiting for machine to come up
	I1205 20:30:38.662839  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:38.663212  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:38.663250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:38.663160  586433 retry.go:31] will retry after 3.371938424s: waiting for machine to come up
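	The retry.go lines above show the wait-for-IP loop: after restarting the VM, minikube repeatedly checks the domain's DHCP lease and sleeps for a growing, jittered interval until the guest reports an address. A minimal Go sketch of that pattern, assuming a hypothetical lookupIP helper (an illustration only, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stand-in for querying the hypervisor's DHCP leases; it
	// returns an error until the guest has been assigned an address.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls lookupIP with a jittered, growing delay, much like the
	// "will retry after ...: waiting for machine to come up" lines in the log.
	func waitForIP(domain string, deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 200 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the backoff, roughly as seen above
		}
		return "", fmt.Errorf("timed out waiting for %s to come up", domain)
	}

	func main() {
		if _, err := waitForIP("embed-certs-789000", 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}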
	I1205 20:30:43.457332  585602 start.go:364] duration metric: took 3m31.488905557s to acquireMachinesLock for "old-k8s-version-386085"
	I1205 20:30:43.457418  585602 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:43.457427  585602 fix.go:54] fixHost starting: 
	I1205 20:30:43.457835  585602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:43.457891  585602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:43.474845  585602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I1205 20:30:43.475386  585602 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:43.475993  585602 main.go:141] libmachine: Using API Version  1
	I1205 20:30:43.476026  585602 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:43.476404  585602 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:43.476613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:30:43.476778  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetState
	I1205 20:30:43.478300  585602 fix.go:112] recreateIfNeeded on old-k8s-version-386085: state=Stopped err=<nil>
	I1205 20:30:43.478329  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	W1205 20:30:43.478502  585602 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:43.480644  585602 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386085" ...
	I1205 20:30:42.038738  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039204  585113 main.go:141] libmachine: (embed-certs-789000) Found IP for machine: 192.168.39.200
	I1205 20:30:42.039235  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has current primary IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039244  585113 main.go:141] libmachine: (embed-certs-789000) Reserving static IP address...
	I1205 20:30:42.039760  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.039806  585113 main.go:141] libmachine: (embed-certs-789000) DBG | skip adding static IP to network mk-embed-certs-789000 - found existing host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"}
	I1205 20:30:42.039819  585113 main.go:141] libmachine: (embed-certs-789000) Reserved static IP address: 192.168.39.200
	I1205 20:30:42.039835  585113 main.go:141] libmachine: (embed-certs-789000) Waiting for SSH to be available...
	I1205 20:30:42.039843  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Getting to WaitForSSH function...
	I1205 20:30:42.042013  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042352  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.042386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042542  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH client type: external
	I1205 20:30:42.042562  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa (-rw-------)
	I1205 20:30:42.042586  585113 main.go:141] libmachine: (embed-certs-789000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:30:42.042595  585113 main.go:141] libmachine: (embed-certs-789000) DBG | About to run SSH command:
	I1205 20:30:42.042603  585113 main.go:141] libmachine: (embed-certs-789000) DBG | exit 0
	I1205 20:30:42.168573  585113 main.go:141] libmachine: (embed-certs-789000) DBG | SSH cmd err, output: <nil>: 
	I1205 20:30:42.168960  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetConfigRaw
	I1205 20:30:42.169783  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.172396  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.172790  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.172818  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.173023  585113 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/config.json ...
	I1205 20:30:42.173214  585113 machine.go:93] provisionDockerMachine start ...
	I1205 20:30:42.173234  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:42.173465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.175399  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175754  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.175785  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175885  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.176063  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176208  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176412  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.176583  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.176816  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.176830  585113 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:30:42.280829  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:30:42.280861  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281135  585113 buildroot.go:166] provisioning hostname "embed-certs-789000"
	I1205 20:30:42.281168  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.284355  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284692  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.284723  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284817  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.285019  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285185  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285338  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.285511  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.285716  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.285730  585113 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-789000 && echo "embed-certs-789000" | sudo tee /etc/hostname
	I1205 20:30:42.409310  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-789000
	
	I1205 20:30:42.409370  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.412182  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412524  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.412566  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412779  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.412989  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413137  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413278  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.413468  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.413674  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.413690  585113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-789000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-789000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-789000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:30:42.529773  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:30:42.529806  585113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:30:42.529829  585113 buildroot.go:174] setting up certificates
	I1205 20:30:42.529841  585113 provision.go:84] configureAuth start
	I1205 20:30:42.529850  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.530201  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.533115  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533527  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.533558  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533753  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.535921  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536310  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.536339  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536518  585113 provision.go:143] copyHostCerts
	I1205 20:30:42.536610  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:30:42.536631  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:30:42.536698  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:30:42.536793  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:30:42.536802  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:30:42.536826  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:30:42.536880  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:30:42.536887  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:30:42.536908  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:30:42.536956  585113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-789000 san=[127.0.0.1 192.168.39.200 embed-certs-789000 localhost minikube]
	I1205 20:30:42.832543  585113 provision.go:177] copyRemoteCerts
	I1205 20:30:42.832610  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:30:42.832640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.835403  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835669  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.835701  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835848  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.836027  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.836161  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.836314  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:42.918661  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:30:42.943903  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 20:30:42.968233  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:30:42.993174  585113 provision.go:87] duration metric: took 463.317149ms to configureAuth
	I1205 20:30:42.993249  585113 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:30:42.993449  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:30:42.993554  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.996211  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996637  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.996696  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996841  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.997049  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997196  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997305  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.997458  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.997641  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.997656  585113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:30:43.220096  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:30:43.220127  585113 machine.go:96] duration metric: took 1.046899757s to provisionDockerMachine
	I1205 20:30:43.220141  585113 start.go:293] postStartSetup for "embed-certs-789000" (driver="kvm2")
	I1205 20:30:43.220152  585113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:30:43.220176  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.220544  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:30:43.220584  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.223481  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.223860  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.223889  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.224102  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.224316  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.224483  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.224667  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.307878  585113 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:30:43.312875  585113 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:30:43.312905  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:30:43.312981  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:30:43.313058  585113 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:30:43.313169  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:30:43.323221  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:43.347978  585113 start.go:296] duration metric: took 127.819083ms for postStartSetup
	I1205 20:30:43.348023  585113 fix.go:56] duration metric: took 19.786318897s for fixHost
	I1205 20:30:43.348046  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.350639  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351004  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.351026  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351247  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.351478  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351642  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351803  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.351950  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:43.352122  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:43.352133  585113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:30:43.457130  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430643.415370749
	
	I1205 20:30:43.457164  585113 fix.go:216] guest clock: 1733430643.415370749
	I1205 20:30:43.457176  585113 fix.go:229] Guest: 2024-12-05 20:30:43.415370749 +0000 UTC Remote: 2024-12-05 20:30:43.34802793 +0000 UTC m=+292.733798952 (delta=67.342819ms)
	I1205 20:30:43.457209  585113 fix.go:200] guest clock delta is within tolerance: 67.342819ms
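	The fix.go lines above run `date +%s.%N` on the guest and compare the result against the local clock, treating the machine as healthy when the drift stays within a tolerance. A rough Go sketch of that comparison, with the tolerance value assumed purely for illustration (minikube's real threshold may differ):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// parseGuestClock turns the guest's `date +%s.%N` output
	// (seconds.nanoseconds) into a time.Time value.
	func parseGuestClock(out string) (time.Time, error) {
		secs, err := strconv.ParseFloat(out, 64)
		if err != nil {
			return time.Time{}, err
		}
		sec := int64(secs)
		nsec := int64((secs - float64(sec)) * 1e9)
		return time.Unix(sec, nsec), nil
	}

	func main() {
		const tolerance = 2 * time.Second // assumed tolerance, illustration only

		guest, err := parseGuestClock("1733430643.415370749") // sample value from the log above
		if err != nil {
			fmt.Println(err)
			return
		}
		remote := time.Now()
		delta := time.Duration(math.Abs(float64(guest.Sub(remote))))
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}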
	I1205 20:30:43.457217  585113 start.go:83] releasing machines lock for "embed-certs-789000", held for 19.895543311s
	I1205 20:30:43.457251  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.457563  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:43.460628  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461002  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.461042  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461175  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461758  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461937  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.462067  585113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:30:43.462120  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.462147  585113 ssh_runner.go:195] Run: cat /version.json
	I1205 20:30:43.462169  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.464859  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465147  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465237  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465264  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465472  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465497  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465589  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465711  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465768  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.465863  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465907  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.466006  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.466129  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.568909  585113 ssh_runner.go:195] Run: systemctl --version
	I1205 20:30:43.575175  585113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:30:43.725214  585113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:30:43.732226  585113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:30:43.732369  585113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:30:43.750186  585113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:30:43.750223  585113 start.go:495] detecting cgroup driver to use...
	I1205 20:30:43.750296  585113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:30:43.767876  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:30:43.783386  585113 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:30:43.783465  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:30:43.799917  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:30:43.815607  585113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:30:43.935150  585113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:30:44.094292  585113 docker.go:233] disabling docker service ...
	I1205 20:30:44.094378  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:30:44.111307  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:30:44.127528  585113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:30:44.284496  585113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:30:44.422961  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:30:44.439104  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:30:44.461721  585113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:30:44.461787  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.476398  585113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:30:44.476463  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.489821  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.502250  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.514245  585113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:30:44.528227  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.540205  585113 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.559447  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.571434  585113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:30:44.583635  585113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:30:44.583717  585113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:30:44.600954  585113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:30:44.613381  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:44.733592  585113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:30:44.843948  585113 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:30:44.844036  585113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:30:44.849215  585113 start.go:563] Will wait 60s for crictl version
	I1205 20:30:44.849275  585113 ssh_runner.go:195] Run: which crictl
	I1205 20:30:44.853481  585113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:30:44.900488  585113 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:30:44.900583  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.944771  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.977119  585113 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:30:44.978527  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:44.981609  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982001  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:44.982037  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982240  585113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:30:44.986979  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:30:45.001779  585113 kubeadm.go:883] updating cluster {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:30:45.001935  585113 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:30:45.002021  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:45.041827  585113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:30:45.041918  585113 ssh_runner.go:195] Run: which lz4
	I1205 20:30:45.046336  585113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:30:45.050804  585113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:30:45.050852  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
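	The ssh_runner lines above first stat /preloaded.tar.lz4 on the guest and only transfer the ~392 MB preload tarball when that existence check fails. A minimal check-then-copy sketch, with runCmd and copyFile as hypothetical stand-ins for minikube's ssh_runner (not the real API):

	package main

	import "fmt"

	// runCmd and copyFile are hypothetical stand-ins: runCmd executes a command
	// on the guest over SSH, copyFile scp's a local file to the guest.
	func runCmd(cmd string) error { return fmt.Errorf("Process exited with status 1") }
	func copyFile(src, dst string) error {
		fmt.Printf("scp %s --> %s\n", src, dst)
		return nil
	}

	// ensurePreload copies the preload tarball to the guest only when the
	// existence check fails, mirroring the stat + scp sequence in the log.
	func ensurePreload(localTarball string) error {
		const remote = "/preloaded.tar.lz4"
		if err := runCmd(fmt.Sprintf("stat -c %%s %s", remote)); err == nil {
			return nil // already present, nothing to copy
		}
		return copyFile(localTarball, remote)
	}

	func main() {
		_ = ensurePreload("preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4")
	}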
	I1205 20:30:43.482307  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .Start
	I1205 20:30:43.482501  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring networks are active...
	I1205 20:30:43.483222  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network default is active
	I1205 20:30:43.483574  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network mk-old-k8s-version-386085 is active
	I1205 20:30:43.484156  585602 main.go:141] libmachine: (old-k8s-version-386085) Getting domain xml...
	I1205 20:30:43.485045  585602 main.go:141] libmachine: (old-k8s-version-386085) Creating domain...
	I1205 20:30:44.770817  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting to get IP...
	I1205 20:30:44.772079  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:44.772538  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:44.772599  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:44.772517  586577 retry.go:31] will retry after 247.056435ms: waiting for machine to come up
	I1205 20:30:45.021096  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.021642  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.021560  586577 retry.go:31] will retry after 241.543543ms: waiting for machine to come up
	I1205 20:30:45.265136  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.265654  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.265683  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.265596  586577 retry.go:31] will retry after 324.624293ms: waiting for machine to come up
	I1205 20:30:45.592067  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.592603  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.592636  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.592558  586577 retry.go:31] will retry after 408.275958ms: waiting for machine to come up
	I1205 20:30:46.002321  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.002872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.002904  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.002808  586577 retry.go:31] will retry after 693.356488ms: waiting for machine to come up
	I1205 20:30:46.697505  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.697874  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.697900  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.697846  586577 retry.go:31] will retry after 906.807324ms: waiting for machine to come up
	I1205 20:30:46.612504  585113 crio.go:462] duration metric: took 1.56620974s to copy over tarball
	I1205 20:30:46.612585  585113 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:30:48.868826  585113 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256202653s)
	I1205 20:30:48.868863  585113 crio.go:469] duration metric: took 2.256329112s to extract the tarball
	I1205 20:30:48.868873  585113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:30:48.906872  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:48.955442  585113 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:30:48.955468  585113 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:30:48.955477  585113 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.31.2 crio true true} ...
	I1205 20:30:48.955603  585113 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-789000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:30:48.955668  585113 ssh_runner.go:195] Run: crio config
	I1205 20:30:49.007389  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:49.007419  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:49.007433  585113 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:30:49.007473  585113 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-789000 NodeName:embed-certs-789000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:30:49.007656  585113 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-789000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:30:49.007734  585113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:30:49.021862  585113 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:30:49.021949  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:30:49.032937  585113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1205 20:30:49.053311  585113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:30:49.073636  585113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1205 20:30:49.094437  585113 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I1205 20:30:49.098470  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
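The two runs above first check whether /etc/hosts already maps control-plane.minikube.internal to 192.168.39.200 and, if not, rewrite the file by filtering out any stale entry and appending the current one. A stdlib Go equivalent of that bash one-liner, as a sketch (it writes the result to a temp file and would still need the privileged copy back into /etc/hosts, as the logged `sudo cp` does):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.200\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale control-plane.minikube.internal mapping.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// The logged command writes to /tmp/h.$$ and then copies it back with
	// `sudo cp`; writing /etc/hosts directly needs the same privileges.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}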
	I1205 20:30:49.112013  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:49.246312  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:30:49.264250  585113 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000 for IP: 192.168.39.200
	I1205 20:30:49.264301  585113 certs.go:194] generating shared ca certs ...
	I1205 20:30:49.264329  585113 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:30:49.264565  585113 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:30:49.264627  585113 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:30:49.264641  585113 certs.go:256] generating profile certs ...
	I1205 20:30:49.264775  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/client.key
	I1205 20:30:49.264854  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key.5c723d79
	I1205 20:30:49.264894  585113 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key
	I1205 20:30:49.265026  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:30:49.265094  585113 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:30:49.265109  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:30:49.265144  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:30:49.265179  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:30:49.265215  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:30:49.265258  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:49.266137  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:30:49.297886  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:30:49.339461  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:30:49.385855  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:30:49.427676  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 20:30:49.466359  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:30:49.492535  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:30:49.518311  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:30:49.543545  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:30:49.567956  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:30:49.592361  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:30:49.616245  585113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:30:49.633947  585113 ssh_runner.go:195] Run: openssl version
	I1205 20:30:49.640353  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:30:49.652467  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657353  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657440  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.664045  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:30:49.679941  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:30:49.695153  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700397  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700458  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.706786  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:30:49.718994  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:30:49.731470  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736654  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736725  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.743034  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
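Each of the three certificate blocks above follows the same pattern: copy the PEM into /usr/share/ca-certificates, ask openssl for its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients trust it. A sketch of that hash-and-symlink step via os/exec, using one of the paths from the log:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL looks up trusted CAs as /etc/ssl/certs/<subject-hash>.0.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
}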
	I1205 20:30:49.755334  585113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:30:49.760378  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:30:49.766942  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:30:49.773911  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:30:49.780556  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:30:49.787004  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:30:49.793473  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
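The six `-checkend 86400` runs above only ask whether each control-plane certificate remains valid for at least another day. The same check in pure Go, as a sketch (one path shown; the other certificates in the log are checked identically):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("not a PEM certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		log.Fatalf("certificate expires at %s", cert.NotAfter)
	}
	fmt.Println("certificate is valid for at least another 24h")
}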
	I1205 20:30:49.800009  585113 kubeadm.go:392] StartCluster: {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:30:49.800118  585113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:30:49.800163  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.844520  585113 cri.go:89] found id: ""
	I1205 20:30:49.844620  585113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:30:49.857604  585113 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:30:49.857640  585113 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:30:49.857702  585113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:30:49.870235  585113 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:30:49.871318  585113 kubeconfig.go:125] found "embed-certs-789000" server: "https://192.168.39.200:8443"
	I1205 20:30:49.873416  585113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:30:49.884281  585113 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
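The `diff -u` above compares the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new; a zero exit status is what lets the restart path conclude that the running cluster needs no reconfiguration. A sketch of that decision in Go, using the paths from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// diff exits 0 when the files match and 1 when they differ.
	out, err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	if err == nil {
		fmt.Println("configs match: no reconfiguration needed")
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		fmt.Printf("configs differ, control plane must be reconfigured:\n%s", out)
		return
	}
	fmt.Printf("could not compare configs: %v\n%s", err, out)
}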
	I1205 20:30:49.884331  585113 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:30:49.884348  585113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:30:49.884410  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.930238  585113 cri.go:89] found id: ""
	I1205 20:30:49.930351  585113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:30:49.947762  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:30:49.957878  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:30:49.957902  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:30:49.957960  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:30:49.967261  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:30:49.967342  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:30:49.977868  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:30:49.987715  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:30:49.987777  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:30:49.998157  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.008224  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:30:50.008334  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.018748  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:30:50.028204  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:30:50.028287  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:30:50.038459  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:30:50.049458  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:50.175199  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:47.606601  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:47.607065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:47.607098  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:47.607001  586577 retry.go:31] will retry after 1.007867893s: waiting for machine to come up
	I1205 20:30:48.617140  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:48.617641  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:48.617674  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:48.617608  586577 retry.go:31] will retry after 1.15317606s: waiting for machine to come up
	I1205 20:30:49.773126  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:49.773670  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:49.773699  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:49.773620  586577 retry.go:31] will retry after 1.342422822s: waiting for machine to come up
	I1205 20:30:51.117592  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:51.118034  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:51.118065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:51.117973  586577 retry.go:31] will retry after 1.575794078s: waiting for machine to come up
	I1205 20:30:51.203131  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.027881984s)
	I1205 20:30:51.203193  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.415679  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.500984  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
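The restart path above does not run a full `kubeadm init`; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered config, with PATH prefixed by the versioned binaries. A sketch of driving that phase sequence from Go, with the commands taken verbatim from the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		// Same shape as the logged commands: prefix PATH with the
		// versioned binaries, then run a single init phase.
		script := `sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" ` +
			`kubeadm init phase ` + phase + ` --config /var/tmp/minikube/kubeadm.yaml`
		if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
			log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
}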
	I1205 20:30:51.598883  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:30:51.598986  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.099206  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.599755  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.619189  585113 api_server.go:72] duration metric: took 1.020303049s to wait for apiserver process to appear ...
	I1205 20:30:52.619236  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:30:52.619268  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:52.619903  585113 api_server.go:269] stopped: https://192.168.39.200:8443/healthz: Get "https://192.168.39.200:8443/healthz": dial tcp 192.168.39.200:8443: connect: connection refused
	I1205 20:30:53.119501  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.342363  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:30:55.342398  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:30:55.342418  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.471683  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.471729  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:55.619946  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.634855  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.634906  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.119928  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.128358  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:56.128396  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.620047  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.625869  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:30:56.633658  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:30:56.633698  585113 api_server.go:131] duration metric: took 4.014451973s to wait for apiserver health ...
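The loop above polls https://192.168.39.200:8443/healthz every 500ms, tolerating connection refused, 403 (anonymous user before RBAC bootstrap) and 500 (post-start hooks still settling) until it finally sees 200/ok. A minimal polling sketch with net/http; the real client authenticates with the cluster's client certificates, whereas this sketch simply skips TLS verification for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch only: skip TLS verification instead of
		// presenting the client certificates minikube actually uses.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.200:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return
			}
			// 403 before RBAC bootstrap, 500 while post-start hooks finish.
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}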
	I1205 20:30:56.633712  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:56.633721  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:56.635658  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:30:52.695389  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:52.695838  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:52.695868  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:52.695784  586577 retry.go:31] will retry after 2.377931285s: waiting for machine to come up
	I1205 20:30:55.076859  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:55.077428  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:55.077469  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:55.077377  586577 retry.go:31] will retry after 2.586837249s: waiting for machine to come up
	I1205 20:30:56.637276  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:30:56.649131  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
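The bridge CNI step above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact contents of that file are not shown in the log, so the sketch below writes a generic bridge + host-local conflist of the kind such a file contains, with the pod CIDR matching the podSubnet from the kubeadm config above; treat it as an illustration of the format, not the file itself:

package main

import (
	"log"
	"os"
)

// Illustrative only: a standard bridge + host-local CNI conflist using
// the 10.244.0.0/16 pod CIDR configured earlier.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.244.0.0/16" }]]
      }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		log.Fatal(err)
	}
}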
	I1205 20:30:56.670981  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:30:56.682424  585113 system_pods.go:59] 8 kube-system pods found
	I1205 20:30:56.682497  585113 system_pods.go:61] "coredns-7c65d6cfc9-hrrjc" [43d8b550-f29d-4a84-a2fc-b456abc486c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:30:56.682508  585113 system_pods.go:61] "etcd-embed-certs-789000" [99f232e4-1bc8-4f98-8bcf-8aa61d66158b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:30:56.682519  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [d1d11749-0ddc-4172-aaa9-bca00c64c912] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:30:56.682528  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [b291c993-cd10-4d0f-8c3e-a6db726cf83a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:30:56.682536  585113 system_pods.go:61] "kube-proxy-h79dj" [80abe907-24e7-4001-90a6-f4d10fd9fc6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:30:56.682544  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [490d7afa-24fd-43c8-8088-539bb7e1eb9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:30:56.682556  585113 system_pods.go:61] "metrics-server-6867b74b74-tlsjl" [cd1d73a4-27d1-4e68-b7d8-6da497fc4e53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:30:56.682570  585113 system_pods.go:61] "storage-provisioner" [3246e383-4f15-4222-a50c-c5b243fda12a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:30:56.682579  585113 system_pods.go:74] duration metric: took 11.566899ms to wait for pod list to return data ...
	I1205 20:30:56.682598  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:30:56.687073  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:30:56.687172  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:30:56.687222  585113 node_conditions.go:105] duration metric: took 4.613225ms to run NodePressure ...
	I1205 20:30:56.687273  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:56.981686  585113 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985944  585113 kubeadm.go:739] kubelet initialised
	I1205 20:30:56.985968  585113 kubeadm.go:740] duration metric: took 4.256434ms waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985976  585113 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:30:56.991854  585113 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:30:58.997499  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
	I1205 20:30:57.667200  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:57.667644  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:57.667681  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:57.667592  586577 retry.go:31] will retry after 2.856276116s: waiting for machine to come up
	I1205 20:31:00.525334  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:00.525796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:31:00.525830  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:31:00.525740  586577 retry.go:31] will retry after 5.119761936s: waiting for machine to come up
	I1205 20:31:00.999102  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:01.500344  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:01.500371  585113 pod_ready.go:82] duration metric: took 4.508490852s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:01.500382  585113 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:03.506621  585113 pod_ready.go:103] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:05.007677  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:05.007703  585113 pod_ready.go:82] duration metric: took 3.507315826s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:05.007713  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
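The pod_ready loop above watches each system-critical pod until its Ready condition flips to True (coredns took about 4.5s and etcd about 3.5s in this run). A client-go sketch of the same wait; the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-789000", metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}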
	I1205 20:31:05.646790  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647230  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has current primary IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647264  585602 main.go:141] libmachine: (old-k8s-version-386085) Found IP for machine: 192.168.72.144
	I1205 20:31:05.647278  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserving static IP address...
	I1205 20:31:05.647796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.647834  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | skip adding static IP to network mk-old-k8s-version-386085 - found existing host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"}
	I1205 20:31:05.647856  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserved static IP address: 192.168.72.144
	I1205 20:31:05.647872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Getting to WaitForSSH function...
	I1205 20:31:05.647889  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting for SSH to be available...
	I1205 20:31:05.650296  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650610  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.650643  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH client type: external
	I1205 20:31:05.650779  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa (-rw-------)
	I1205 20:31:05.650816  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:05.650837  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | About to run SSH command:
	I1205 20:31:05.650851  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | exit 0
	I1205 20:31:05.776876  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:05.777311  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:31:05.777948  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:05.780609  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781053  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.781091  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781319  585602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json ...
	I1205 20:31:05.781585  585602 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:05.781607  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:05.781942  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.784729  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785155  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.785191  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785326  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.785491  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785659  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785886  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.786078  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.786309  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.786323  585602 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:05.893034  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:05.893079  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893388  585602 buildroot.go:166] provisioning hostname "old-k8s-version-386085"
	I1205 20:31:05.893426  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893623  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.896484  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.896883  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.896910  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.897031  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.897252  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897441  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897615  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.897796  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.897965  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.897977  585602 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386085 && echo "old-k8s-version-386085" | sudo tee /etc/hostname
	I1205 20:31:06.017910  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386085
	
	I1205 20:31:06.017939  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.020956  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021298  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.021332  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021494  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021863  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021995  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.022137  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.022325  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.022342  585602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386085' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386085/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386085' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:06.138200  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:06.138234  585602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:06.138261  585602 buildroot.go:174] setting up certificates
	I1205 20:31:06.138274  585602 provision.go:84] configureAuth start
	I1205 20:31:06.138287  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:06.138588  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.141488  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.141909  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.141965  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.142096  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.144144  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144720  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.144742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144951  585602 provision.go:143] copyHostCerts
	I1205 20:31:06.145020  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:06.145031  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:06.145085  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:06.145206  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:06.145219  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:06.145248  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:06.145335  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:06.145346  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:06.145376  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:06.145452  585602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386085 san=[127.0.0.1 192.168.72.144 localhost minikube old-k8s-version-386085]
	I1205 20:31:06.276466  585602 provision.go:177] copyRemoteCerts
	I1205 20:31:06.276530  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:06.276559  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.279218  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279550  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.279578  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279766  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.279990  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.280152  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.280317  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.362479  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:06.387631  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:31:06.413110  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:06.437931  585602 provision.go:87] duration metric: took 299.641033ms to configureAuth
	I1205 20:31:06.437962  585602 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:06.438176  585602 config.go:182] Loaded profile config "old-k8s-version-386085": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:31:06.438272  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.441059  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441413  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.441444  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441655  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.441846  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.441992  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.442174  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.442379  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.442552  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.442568  585602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:06.655666  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:06.655699  585602 machine.go:96] duration metric: took 874.099032ms to provisionDockerMachine
	I1205 20:31:06.655713  585602 start.go:293] postStartSetup for "old-k8s-version-386085" (driver="kvm2")
	I1205 20:31:06.655723  585602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:06.655752  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.656082  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:06.656115  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.658835  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659178  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.659229  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659378  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.659636  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.659808  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.659971  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.744484  585602 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:06.749025  585602 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:06.749060  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:06.749134  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:06.749273  585602 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:06.749411  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:06.760720  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:06.785449  585602 start.go:296] duration metric: took 129.720092ms for postStartSetup
	I1205 20:31:06.785500  585602 fix.go:56] duration metric: took 23.328073686s for fixHost
	I1205 20:31:06.785526  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.788417  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.788797  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.788828  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.789049  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.789296  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789483  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789688  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.789870  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.790046  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.790065  585602 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:06.897781  585929 start.go:364] duration metric: took 3m3.751494327s to acquireMachinesLock for "default-k8s-diff-port-942599"
	I1205 20:31:06.897847  585929 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:06.897858  585929 fix.go:54] fixHost starting: 
	I1205 20:31:06.898355  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:06.898419  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:06.916556  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I1205 20:31:06.917111  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:06.917648  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:31:06.917674  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:06.918014  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:06.918256  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:06.918402  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:31:06.920077  585929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-942599: state=Stopped err=<nil>
	I1205 20:31:06.920105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	W1205 20:31:06.920257  585929 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:06.922145  585929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-942599" ...
	I1205 20:31:06.923548  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Start
	I1205 20:31:06.923770  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring networks are active...
	I1205 20:31:06.924750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network default is active
	I1205 20:31:06.925240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network mk-default-k8s-diff-port-942599 is active
	I1205 20:31:06.925721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Getting domain xml...
	I1205 20:31:06.926719  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Creating domain...
	I1205 20:31:06.897579  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430666.872047181
	
	I1205 20:31:06.897606  585602 fix.go:216] guest clock: 1733430666.872047181
	I1205 20:31:06.897615  585602 fix.go:229] Guest: 2024-12-05 20:31:06.872047181 +0000 UTC Remote: 2024-12-05 20:31:06.785506394 +0000 UTC m=+234.970971247 (delta=86.540787ms)
	I1205 20:31:06.897679  585602 fix.go:200] guest clock delta is within tolerance: 86.540787ms
	I1205 20:31:06.897691  585602 start.go:83] releasing machines lock for "old-k8s-version-386085", held for 23.440303187s
	I1205 20:31:06.897727  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.898085  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.901127  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901530  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.901567  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901719  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902413  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902626  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902776  585602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:06.902827  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.902878  585602 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:06.902903  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.905664  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.905912  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906050  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906086  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906256  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906341  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906367  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906411  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906517  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906684  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906837  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906849  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.907112  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.986078  585602 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:07.009500  585602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:07.159146  585602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:07.166263  585602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:07.166358  585602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:07.186021  585602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:07.186063  585602 start.go:495] detecting cgroup driver to use...
	I1205 20:31:07.186140  585602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:07.205074  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:07.221207  585602 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:07.221268  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:07.236669  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:07.252848  585602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:07.369389  585602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:07.504993  585602 docker.go:233] disabling docker service ...
	I1205 20:31:07.505101  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:07.523294  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:07.538595  585602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:07.687830  585602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:07.816176  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:07.833624  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:07.853409  585602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:31:07.853478  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.865346  585602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:07.865426  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.877962  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.889255  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.901632  585602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:07.916169  585602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:07.927092  585602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:07.927169  585602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:07.942288  585602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:31:07.953314  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:08.092156  585602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:08.205715  585602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:08.205799  585602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:08.214280  585602 start.go:563] Will wait 60s for crictl version
	I1205 20:31:08.214351  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:08.220837  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:08.265983  585602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:08.266065  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.295839  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.327805  585602 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
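The 585602 lines above show the runtime being prepared for this profile: cri-docker and docker are masked, /etc/crictl.yaml is pointed at the CRI-O socket, and sed rewrites /etc/crio/crio.conf.d/02-crio.conf so that pause_image is registry.k8s.io/pause:3.2 and cgroup_manager is cgroupfs, after which crio is restarted. As a rough local illustration only, not minikube's own code, the same two file edits could be expressed in a few lines of Go, assuming the same drop-in path and root privileges:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// configureCrio mirrors the sed edits and crictl.yaml write shown in the log
// above; it assumes the process runs as root and the CRI-O drop-in exists.
func configureCrio(confPath string) error {
	data, err := os.ReadFile(confPath)
	if err != nil {
		return err
	}
	// Point CRI-O at the pause image selected for Kubernetes v1.20.0.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// Match the cgroupfs driver that the kubelet config later in the log uses.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(confPath, data, 0o644); err != nil {
		return err
	}
	// Tell crictl which CRI socket to talk to.
	return os.WriteFile("/etc/crictl.yaml",
		[]byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0o644)
}

func main() {
	if err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Println(err)
	}
}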
	I1205 20:31:07.014634  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:08.018024  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.018062  585113 pod_ready.go:82] duration metric: took 3.010340127s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.018080  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024700  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.024731  585113 pod_ready.go:82] duration metric: took 6.639434ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024744  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030379  585113 pod_ready.go:93] pod "kube-proxy-h79dj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.030399  585113 pod_ready.go:82] duration metric: took 5.648086ms for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030408  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036191  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.036211  585113 pod_ready.go:82] duration metric: took 5.797344ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036223  585113 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:10.051737  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
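The interleaved 585113 lines come from the embed-certs profile, where pod_ready.go polls each control-plane pod, and here the metrics-server pod, for up to 4m0s until its Ready condition reports True. The following is a minimal client-go sketch of that kind of readiness poll, not minikube's pod_ready.go; it assumes a kubeconfig at the standard location and uses a fixed 2s poll interval:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // simple fixed-interval poll
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "metrics-server-6867b74b74-tlsjl", 4*time.Minute); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("pod is Ready")
	}
}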
	I1205 20:31:08.329278  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:08.332352  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332700  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:08.332747  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332930  585602 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:08.337611  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:08.350860  585602 kubeadm.go:883] updating cluster {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:08.351016  585602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:31:08.351090  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:08.403640  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:08.403716  585602 ssh_runner.go:195] Run: which lz4
	I1205 20:31:08.408211  585602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:08.413136  585602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:08.413168  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 20:31:10.209351  585602 crio.go:462] duration metric: took 1.801169802s to copy over tarball
	I1205 20:31:10.209438  585602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
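Because no /preloaded.tar.lz4 exists on the VM, the cached preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 is scp'd over and unpacked into /var with the tar invocation shown above. A stand-alone sketch of just that extraction step, assuming lz4 and GNU tar are installed and the process already runs as root (so no sudo):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a minikube preload tarball into /var, mirroring the
// tar command the log runs over SSH.
func extractPreload(tarball string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // decompress through lz4
		"-C", "/var", // image and overlay storage live under /var
		"-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}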
	I1205 20:31:08.255781  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting to get IP...
	I1205 20:31:08.256721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257262  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.257164  586715 retry.go:31] will retry after 301.077952ms: waiting for machine to come up
	I1205 20:31:08.559682  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560187  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560216  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.560130  586715 retry.go:31] will retry after 364.457823ms: waiting for machine to come up
	I1205 20:31:08.926774  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927371  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927401  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.927274  586715 retry.go:31] will retry after 461.958198ms: waiting for machine to come up
	I1205 20:31:09.390861  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391502  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.391432  586715 retry.go:31] will retry after 587.049038ms: waiting for machine to come up
	I1205 20:31:09.980451  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.980999  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.981026  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.980932  586715 retry.go:31] will retry after 499.551949ms: waiting for machine to come up
	I1205 20:31:10.482653  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483188  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483219  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:10.483135  586715 retry.go:31] will retry after 749.476034ms: waiting for machine to come up
	I1205 20:31:11.233788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234286  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234315  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:11.234227  586715 retry.go:31] will retry after 768.81557ms: waiting for machine to come up
	I1205 20:31:12.004904  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005427  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005460  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:12.005382  586715 retry.go:31] will retry after 1.360132177s: waiting for machine to come up
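While the old-k8s-version VM is provisioned, the 585929 process restarts the default-k8s-diff-port-942599 VM and repeatedly queries its DHCP lease, retrying with growing, jittered delays until an IP appears. A self-contained sketch of that retry pattern follows; the lookupIP stub is purely hypothetical and stands in for the libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// lookupIP is a stub that fails a few times before succeeding, for illustration.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errNoIP
	}
	return "192.168.50.10", nil
}

// retryForIP retries with a jittered, growing delay until an IP is found or
// the overall deadline passes, similar in spirit to the retry lines above.
func retryForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		delay := time.Duration(200+rand.Intn(300)) * time.Millisecond * time.Duration(attempt+1)
		fmt.Printf("attempt %d: %v, will retry after %s\n", attempt, err, delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("no IP within %s", maxWait)
}

func main() {
	ip, err := retryForIP(30 * time.Second)
	fmt.Println(ip, err)
}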
	I1205 20:31:12.549406  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:15.043540  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:13.303553  585602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094044744s)
	I1205 20:31:13.303598  585602 crio.go:469] duration metric: took 3.094215888s to extract the tarball
	I1205 20:31:13.303610  585602 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:13.350989  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:13.388660  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:13.388702  585602 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:13.388814  585602 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.388822  585602 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.388832  585602 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.388853  585602 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.388881  585602 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.388904  585602 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:31:13.388823  585602 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.388859  585602 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390414  585602 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390941  585602 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.391016  585602 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.390927  585602 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.391373  585602 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.391378  585602 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.565006  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.577450  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 20:31:13.584653  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.597086  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.619848  585602 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 20:31:13.619899  585602 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.619955  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.623277  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.628407  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.697151  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.703111  585602 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 20:31:13.703167  585602 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 20:31:13.703219  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736004  585602 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 20:31:13.736059  585602 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.736058  585602 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 20:31:13.736078  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.736094  585602 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.736104  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736135  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736187  585602 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 20:31:13.736207  585602 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.736235  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.783651  585602 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 20:31:13.783706  585602 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.783758  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.787597  585602 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 20:31:13.787649  585602 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.787656  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.787692  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.828445  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.828491  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.828544  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.828573  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.828616  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.828635  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.890937  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.992600  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.992661  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.992725  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.992780  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.095364  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:14.095462  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:14.163224  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 20:31:14.163320  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:14.163339  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:14.163420  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:14.163510  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.243805  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 20:31:14.243860  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 20:31:14.243881  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 20:31:14.287718  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 20:31:14.290994  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 20:31:14.291049  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 20:31:14.579648  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:14.728232  585602 cache_images.go:92] duration metric: took 1.339506459s to LoadCachedImages
	W1205 20:31:14.728389  585602 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1205 20:31:14.728417  585602 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.20.0 crio true true} ...
	I1205 20:31:14.728570  585602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386085 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:31:14.728672  585602 ssh_runner.go:195] Run: crio config
	I1205 20:31:14.778932  585602 cni.go:84] Creating CNI manager for ""
	I1205 20:31:14.778957  585602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:14.778967  585602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:14.778987  585602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386085 NodeName:old-k8s-version-386085 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:31:14.779131  585602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386085"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:14.779196  585602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 20:31:14.792400  585602 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:14.792494  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:14.802873  585602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:31:14.821562  585602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:14.839442  585602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 20:31:14.861314  585602 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:14.865457  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:14.878278  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:15.002193  585602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:15.030699  585602 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085 for IP: 192.168.72.144
	I1205 20:31:15.030734  585602 certs.go:194] generating shared ca certs ...
	I1205 20:31:15.030758  585602 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.030975  585602 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:15.031027  585602 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:15.031048  585602 certs.go:256] generating profile certs ...
	I1205 20:31:15.031206  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key
	I1205 20:31:15.031276  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18
	I1205 20:31:15.031324  585602 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key
	I1205 20:31:15.031489  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:15.031535  585602 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:15.031550  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:15.031581  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:15.031612  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:15.031644  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:15.031698  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:15.032410  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:15.063090  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:15.094212  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:15.124685  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:15.159953  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 20:31:15.204250  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:31:15.237483  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:15.276431  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:15.303774  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:15.328872  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:15.353852  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:15.380916  585602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:15.401082  585602 ssh_runner.go:195] Run: openssl version
	I1205 20:31:15.407442  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:15.420377  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425721  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425800  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.432475  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:15.446140  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:15.459709  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465165  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465241  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.471609  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:15.484139  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:15.496636  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501575  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501634  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.507814  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
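The `openssl x509 -hash` output in the steps above is the subject-name hash that OpenSSL expects as a symlink name under /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0 here). A rough Go equivalent of the hash-and-symlink step, assuming openssl is on PATH and no hash collisions (hence the fixed ".0" suffix):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the "<subject-hash>.0" symlink that OpenSSL's
// hashed-certs directory layout expects, pointing at the installed CA cert.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any existing link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}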
	I1205 20:31:15.521234  585602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:15.526452  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:15.532999  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:15.540680  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:15.547455  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:15.553996  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:15.560574  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
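Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours; a non-zero exit would force regeneration. The same check can be done directly with crypto/x509 (a sketch; the path below is just one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+window, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}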
	I1205 20:31:15.568489  585602 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:15.568602  585602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:15.568682  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.610693  585602 cri.go:89] found id: ""
	I1205 20:31:15.610808  585602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:15.622685  585602 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:15.622709  585602 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:15.622764  585602 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:15.633754  585602 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:15.634922  585602 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386085" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:31:15.635682  585602 kubeconfig.go:62] /home/jenkins/minikube-integration/20052-530897/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386085" cluster setting kubeconfig missing "old-k8s-version-386085" context setting]
	I1205 20:31:15.636878  585602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.719767  585602 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:15.731576  585602 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I1205 20:31:15.731622  585602 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:15.731639  585602 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:15.731705  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.777769  585602 cri.go:89] found id: ""
	I1205 20:31:15.777875  585602 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:15.797121  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:15.807961  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:15.807991  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:15.808042  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:31:15.818177  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:15.818270  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:15.829092  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:31:15.839471  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:15.839564  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:15.850035  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.859907  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:15.859984  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.870882  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:31:15.881475  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:15.881549  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:31:15.892078  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:15.904312  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.042308  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.787487  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:13.367666  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368154  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368185  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:13.368096  586715 retry.go:31] will retry after 1.319101375s: waiting for machine to come up
	I1205 20:31:14.689562  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690039  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:14.689996  586715 retry.go:31] will retry after 2.267379471s: waiting for machine to come up
	I1205 20:31:16.959412  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959882  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959915  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:16.959804  586715 retry.go:31] will retry after 2.871837018s: waiting for machine to come up
	I1205 20:31:17.044878  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:19.543265  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:17.036864  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.128855  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
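Because existing configuration files were found, the restart path regenerates only what it needs by replaying individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full `kubeadm init`. A minimal sketch of that sequence as a local wrapper (the real run goes through ssh_runner with the PATH override shown in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases replays the kubeadm init phases used for a control-plane
// restart, in the same order as the log above.
func runInitPhases(kubeadmYAML string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", kubeadmYAML)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", phase, err)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}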
	I1205 20:31:17.219276  585602 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:17.219380  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:17.720206  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.219623  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.719555  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.219776  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.719967  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.219686  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.719806  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.219875  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.719915  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.834750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835299  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835326  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:19.835203  586715 retry.go:31] will retry after 2.740879193s: waiting for machine to come up
	I1205 20:31:22.577264  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577746  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577775  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:22.577709  586715 retry.go:31] will retry after 3.807887487s: waiting for machine to come up
	I1205 20:31:22.043635  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:24.543255  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:22.219930  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:22.719848  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.719903  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.220505  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.719726  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.220161  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.720115  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.220399  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.719567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
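The block of repeated `pgrep -xnf kube-apiserver.*minikube.*` calls above is a plain poll: roughly every 500 ms minikube checks whether an apiserver process exists yet. A stripped-down version of that wait loop (the 4-minute timeout here is an assumption for illustration, not minikube's actual value):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process shows up
// or the deadline passes, sleeping ~500ms between attempts as in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -f: match the full command line, -x: pattern must match it exactly, -n: newest match
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}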
	I1205 20:31:27.669618  585025 start.go:364] duration metric: took 59.106849765s to acquireMachinesLock for "no-preload-816185"
	I1205 20:31:27.669680  585025 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:27.669689  585025 fix.go:54] fixHost starting: 
	I1205 20:31:27.670111  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:27.670153  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:27.689600  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1205 20:31:27.690043  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:27.690508  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:31:27.690530  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:27.690931  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:27.691146  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:27.691279  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:31:27.692881  585025 fix.go:112] recreateIfNeeded on no-preload-816185: state=Stopped err=<nil>
	I1205 20:31:27.692905  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	W1205 20:31:27.693059  585025 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:27.694833  585025 out.go:177] * Restarting existing kvm2 VM for "no-preload-816185" ...
	I1205 20:31:26.389296  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389828  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Found IP for machine: 192.168.50.96
	I1205 20:31:26.389866  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has current primary IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserving static IP address...
	I1205 20:31:26.390321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserved static IP address: 192.168.50.96
	I1205 20:31:26.390354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for SSH to be available...
	I1205 20:31:26.390380  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.390404  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | skip adding static IP to network mk-default-k8s-diff-port-942599 - found existing host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"}
	I1205 20:31:26.390420  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Getting to WaitForSSH function...
	I1205 20:31:26.392509  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392875  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.392912  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH client type: external
	I1205 20:31:26.392988  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa (-rw-------)
	I1205 20:31:26.393057  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:26.393086  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | About to run SSH command:
	I1205 20:31:26.393105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | exit 0
	I1205 20:31:26.520867  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:26.521212  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetConfigRaw
	I1205 20:31:26.521857  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.524512  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.524853  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.524883  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.525141  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:31:26.525404  585929 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:26.525425  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:26.525639  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.527806  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.528121  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528257  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.528474  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528635  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528771  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.528902  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.529132  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.529147  585929 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:26.645385  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:26.645429  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645719  585929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-942599"
	I1205 20:31:26.645751  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645962  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.648906  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649316  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.649346  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649473  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.649686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649880  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649998  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.650161  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.650338  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.650354  585929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-942599 && echo "default-k8s-diff-port-942599" | sudo tee /etc/hostname
	I1205 20:31:26.780217  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942599
	
	I1205 20:31:26.780253  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.783240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783628  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.783660  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783804  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.783997  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784162  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.784530  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.784747  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.784766  585929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-942599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-942599/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-942599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:26.909975  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:26.910006  585929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:26.910087  585929 buildroot.go:174] setting up certificates
	I1205 20:31:26.910101  585929 provision.go:84] configureAuth start
	I1205 20:31:26.910114  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.910440  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.913667  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.914094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.917031  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917430  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.917462  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917608  585929 provision.go:143] copyHostCerts
	I1205 20:31:26.917681  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:26.917706  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:26.917772  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:26.917889  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:26.917900  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:26.917935  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:26.918013  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:26.918023  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:26.918065  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:26.918163  585929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-942599 san=[127.0.0.1 192.168.50.96 default-k8s-diff-port-942599 localhost minikube]
	I1205 20:31:27.003691  585929 provision.go:177] copyRemoteCerts
	I1205 20:31:27.003783  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:27.003821  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.006311  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006632  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.006665  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006820  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.007011  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.007153  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.007274  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.094973  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:27.121684  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 20:31:27.146420  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:27.171049  585929 provision.go:87] duration metric: took 260.930345ms to configureAuth
	I1205 20:31:27.171083  585929 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:27.171268  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:27.171385  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.174287  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174677  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.174717  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174946  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.175168  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175338  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.175703  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.175927  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.175959  585929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:27.416697  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:27.416724  585929 machine.go:96] duration metric: took 891.305367ms to provisionDockerMachine
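The sysconfig write a few lines above drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into a file read by the CRI-O unit and then restarts the service so the service CIDR is treated as an insecure registry range. A local sketch of those two steps (run here without the SSH hop; paths and flags are copied from the logged command):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// configureCrioInsecureRegistry writes the sysconfig drop-in consumed by the
// CRI-O unit and restarts the service so the flag takes effect.
func configureCrioInsecureRegistry(serviceCIDR string) error {
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		return err
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		return err
	}
	return exec.Command("systemctl", "restart", "crio").Run()
}

func main() {
	if err := configureCrioInsecureRegistry("10.96.0.0/12"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}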
	I1205 20:31:27.416737  585929 start.go:293] postStartSetup for "default-k8s-diff-port-942599" (driver="kvm2")
	I1205 20:31:27.416748  585929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:27.416786  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.417143  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:27.417183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.419694  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420041  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.420072  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420259  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.420488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.420681  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.420813  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.507592  585929 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:27.512178  585929 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:27.512209  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:27.512297  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:27.512416  585929 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:27.512544  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:27.522860  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:27.550167  585929 start.go:296] duration metric: took 133.414654ms for postStartSetup
	I1205 20:31:27.550211  585929 fix.go:56] duration metric: took 20.652352836s for fixHost
	I1205 20:31:27.550240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.553056  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.553490  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553631  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.553822  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554007  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.554372  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.554584  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.554603  585929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:27.669428  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430687.619179277
	
	I1205 20:31:27.669455  585929 fix.go:216] guest clock: 1733430687.619179277
	I1205 20:31:27.669467  585929 fix.go:229] Guest: 2024-12-05 20:31:27.619179277 +0000 UTC Remote: 2024-12-05 20:31:27.550217419 +0000 UTC m=+204.551998169 (delta=68.961858ms)
	I1205 20:31:27.669506  585929 fix.go:200] guest clock delta is within tolerance: 68.961858ms
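The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the delta is small (68.96 ms here). A small sketch of that comparison; the one-second tolerance below is an assumption for illustration, not minikube's actual threshold:

package main

import (
	"fmt"
	"math"
	"os"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far it
// is from the supplied host timestamp.
func clockDelta(guestDate string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestDate), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log above: guest 1733430687.619179277, host 20:31:27.550217419.
	delta, err := clockDelta("1733430687.619179277", time.Unix(0, 1733430687550217419))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	tolerance := time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, math.Abs(float64(delta)) < float64(tolerance))
}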
	I1205 20:31:27.669514  585929 start.go:83] releasing machines lock for "default-k8s-diff-port-942599", held for 20.771694403s
	I1205 20:31:27.669559  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.669877  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:27.672547  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.672978  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.673009  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.673224  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673992  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.674125  585929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:27.674176  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.674201  585929 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:27.674231  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.677006  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677388  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677418  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677437  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677565  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.677745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.677919  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.677925  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677948  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.678115  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.678107  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.678258  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.678382  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.678527  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.790786  585929 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:27.797092  585929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:27.946053  585929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:27.953979  585929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:27.954073  585929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:27.975059  585929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:27.975090  585929 start.go:495] detecting cgroup driver to use...
	I1205 20:31:27.975160  585929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:27.991738  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:28.006412  585929 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:28.006529  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:28.021329  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:28.037390  585929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:28.155470  585929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:28.326332  585929 docker.go:233] disabling docker service ...
	I1205 20:31:28.326415  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:28.343299  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:28.358147  585929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:28.493547  585929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:28.631184  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:28.647267  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:28.670176  585929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:28.670269  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.686230  585929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:28.686312  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.702991  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.715390  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.731909  585929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:28.745042  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.757462  585929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.779049  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
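The run of `sed -i` commands above pins CRI-O's pause image, switches the cgroup manager to cgroupfs with conmon_cgroup = "pod", and makes sure default_sysctls opens unprivileged ports starting at 0. A simplified Go sketch of those edits using regexp (minikube itself applies them with sed over SSH, as logged; this version skips the corner cases the individual sed steps handle):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf approximates the sed edits from the log: pin the pause image,
// force the cgroupfs cgroup manager with conmon_cgroup = "pod", and add a
// default_sysctls block opening unprivileged ports from 0.
func patchCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return os.WriteFile(path, []byte(conf), 0644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}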
	I1205 20:31:28.790960  585929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:28.806652  585929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:28.806724  585929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:28.821835  585929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:31:28.832688  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:28.967877  585929 ssh_runner.go:195] Run: sudo systemctl restart crio
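The run of commands above is minikube's CRI-O preparation: set the pause image, switch the cgroup manager to cgroupfs, route conmon into the "pod" cgroup, open unprivileged low ports via default_sysctls, load br_netfilter (the earlier sysctl probe failed only because the module was not yet loaded), enable IP forwarding, and restart the service. Collected into one standalone sketch, using the same paths and image tag as the log and assuming it is run as root:

    conf=/etc/crio/crio.conf.d/02-crio.conf
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sed -i '/conmon_cgroup = .*/d' "$conf"
    sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    grep -q '^ *default_sysctls' "$conf" || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
    sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
    modprobe br_netfilter                     # provides /proc/sys/net/bridge/bridge-nf-call-iptables
    echo 1 > /proc/sys/net/ipv4/ip_forward
    systemctl daemon-reload && systemctl restart crio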
	I1205 20:31:29.084571  585929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:29.084666  585929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:29.089892  585929 start.go:563] Will wait 60s for crictl version
	I1205 20:31:29.089958  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:31:29.094021  585929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:29.132755  585929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:29.132843  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.161779  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.194415  585929 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:31:27.042893  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:29.545284  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:27.696342  585025 main.go:141] libmachine: (no-preload-816185) Calling .Start
	I1205 20:31:27.696546  585025 main.go:141] libmachine: (no-preload-816185) Ensuring networks are active...
	I1205 20:31:27.697272  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network default is active
	I1205 20:31:27.697720  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network mk-no-preload-816185 is active
	I1205 20:31:27.698153  585025 main.go:141] libmachine: (no-preload-816185) Getting domain xml...
	I1205 20:31:27.698993  585025 main.go:141] libmachine: (no-preload-816185) Creating domain...
	I1205 20:31:29.005551  585025 main.go:141] libmachine: (no-preload-816185) Waiting to get IP...
	I1205 20:31:29.006633  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.007124  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.007217  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.007100  586921 retry.go:31] will retry after 264.716976ms: waiting for machine to come up
	I1205 20:31:29.273821  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.274364  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.274393  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.274318  586921 retry.go:31] will retry after 307.156436ms: waiting for machine to come up
	I1205 20:31:29.582968  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.583583  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.583621  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.583531  586921 retry.go:31] will retry after 335.63624ms: waiting for machine to come up
	I1205 20:31:29.921262  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.921823  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.921855  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.921771  586921 retry.go:31] will retry after 577.408278ms: waiting for machine to come up
	I1205 20:31:30.500556  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:30.501058  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:30.501095  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:30.500999  586921 retry.go:31] will retry after 757.019094ms: waiting for machine to come up
	I1205 20:31:27.220124  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:27.719460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.719599  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.219672  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.720450  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.220436  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.719573  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.220357  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.720052  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.195845  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:29.198779  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199138  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:29.199171  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199365  585929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:29.204553  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:29.217722  585929 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:29.217873  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:29.217943  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:29.259006  585929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:29.259105  585929 ssh_runner.go:195] Run: which lz4
	I1205 20:31:29.264049  585929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:29.268978  585929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:29.269019  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:31:30.811247  585929 crio.go:462] duration metric: took 1.547244528s to copy over tarball
	I1205 20:31:30.811340  585929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:31:32.043543  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:34.044420  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:31.260083  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.260626  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.260658  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.260593  586921 retry.go:31] will retry after 593.111543ms: waiting for machine to come up
	I1205 20:31:31.854850  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.855286  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.855316  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.855224  586921 retry.go:31] will retry after 832.693762ms: waiting for machine to come up
	I1205 20:31:32.690035  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:32.690489  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:32.690515  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:32.690448  586921 retry.go:31] will retry after 1.128242733s: waiting for machine to come up
	I1205 20:31:33.820162  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:33.820798  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:33.820831  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:33.820732  586921 retry.go:31] will retry after 1.331730925s: waiting for machine to come up
	I1205 20:31:35.154230  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:35.154661  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:35.154690  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:35.154590  586921 retry.go:31] will retry after 2.19623815s: waiting for machine to come up
	I1205 20:31:32.220318  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:32.719780  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.220114  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.719554  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.720021  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.219461  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.720334  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.219480  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.720159  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.093756  585929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282380101s)
	I1205 20:31:33.093791  585929 crio.go:469] duration metric: took 2.282510298s to extract the tarball
	I1205 20:31:33.093802  585929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:33.132232  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:33.188834  585929 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:31:33.188868  585929 cache_images.go:84] Images are preloaded, skipping loading
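The preload step above only ships the ~392 MB image tarball when CRI-O does not already have the expected kube-apiserver image for the target version, then re-checks after extraction. A rough shell equivalent of that check (version, socket paths and tarball location are the ones in the log; the grep over the JSON output is a simplification of minikube's proper JSON parsing):

    want="registry.k8s.io/kube-apiserver:v1.31.2"
    if ! sudo crictl images --output json | grep -q "$want"; then
        # copy preloaded-images-k8s-...-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 first
        sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        sudo rm -f /preloaded.tar.lz4
        sudo crictl images --output json | grep -q "$want" && echo "images preloaded"
    fi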
	I1205 20:31:33.188879  585929 kubeadm.go:934] updating node { 192.168.50.96 8444 v1.31.2 crio true true} ...
	I1205 20:31:33.189027  585929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-942599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:31:33.189114  585929 ssh_runner.go:195] Run: crio config
	I1205 20:31:33.235586  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:31:33.235611  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:33.235621  585929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:33.235644  585929 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-942599 NodeName:default-k8s-diff-port-942599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:31:33.235770  585929 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-942599"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.96"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:33.235835  585929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:31:33.246737  585929 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:33.246829  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:33.257763  585929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1205 20:31:33.276025  585929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:33.294008  585929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1205 20:31:33.311640  585929 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:33.315963  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:33.328834  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:33.439221  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:33.457075  585929 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599 for IP: 192.168.50.96
	I1205 20:31:33.457103  585929 certs.go:194] generating shared ca certs ...
	I1205 20:31:33.457131  585929 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:33.457337  585929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:33.457407  585929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:33.457420  585929 certs.go:256] generating profile certs ...
	I1205 20:31:33.457528  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.key
	I1205 20:31:33.457612  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key.d50b8fb2
	I1205 20:31:33.457668  585929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key
	I1205 20:31:33.457824  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:33.457870  585929 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:33.457885  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:33.457924  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:33.457959  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:33.457989  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:33.458044  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:33.459092  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:33.502129  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:33.533461  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:33.572210  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:33.597643  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 20:31:33.621382  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:31:33.648568  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:33.682320  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:33.707415  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:33.733418  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:33.760333  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:33.794070  585929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:33.813531  585929 ssh_runner.go:195] Run: openssl version
	I1205 20:31:33.820336  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:33.832321  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839066  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839135  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.845526  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:33.857376  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:33.868864  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873732  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873799  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.881275  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:33.893144  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:33.904679  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909686  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909760  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.915937  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
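Each CA copied into /usr/share/ca-certificates above is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 in this run), which is how OpenSSL-based clients look up trusted CAs. The same trick for an arbitrary certificate, as a sketch:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")      # e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"     # OpenSSL resolves CAs by <hash>.0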
	I1205 20:31:33.927401  585929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:33.932326  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:33.939165  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:33.945630  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:33.951867  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:33.957857  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:33.963994  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
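The `-checkend 86400` calls above are cheap expiry guards: openssl exits non-zero if the certificate will expire within the next 86400 seconds (24 h), and a failing check is what pushes minikube toward regenerating that certificate. For example:

    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate expires within 24h; regenerate it"
    fi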
	I1205 20:31:33.969964  585929 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:33.970050  585929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:33.970103  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.016733  585929 cri.go:89] found id: ""
	I1205 20:31:34.016814  585929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:34.027459  585929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:34.027478  585929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:34.027523  585929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:34.037483  585929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:34.038588  585929 kubeconfig.go:125] found "default-k8s-diff-port-942599" server: "https://192.168.50.96:8444"
	I1205 20:31:34.041140  585929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:34.050903  585929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.96
	I1205 20:31:34.050938  585929 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:34.050956  585929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:34.051014  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.090840  585929 cri.go:89] found id: ""
	I1205 20:31:34.090932  585929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:34.107686  585929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:34.118277  585929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:34.118305  585929 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:34.118359  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 20:31:34.127654  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:34.127733  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:34.137295  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 20:31:34.147005  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:34.147076  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:34.158576  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.167933  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:34.168022  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.177897  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 20:31:34.187467  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:34.187539  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
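The grep/rm pairs above are the stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444; anything else (here the files simply do not exist yet) is removed so the kubeadm phases below can regenerate it. Condensed into a loop as a sketch (minikube issues the commands one by one, exactly as logged):

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done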
	I1205 20:31:34.197825  585929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:34.210775  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:34.337491  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.308389  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.549708  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.624390  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.706794  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:35.706912  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.207620  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.707990  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.727214  585929 api_server.go:72] duration metric: took 1.020418782s to wait for apiserver process to appear ...
	I1205 20:31:36.727257  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:31:36.727289  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:36.727908  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
	I1205 20:31:37.228102  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
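After the kubeadm init phases, minikube first waits for a kube-apiserver process and then polls its /healthz endpoint on a short interval until it answers; the "connection refused" and client-timeout lines here are expected while the static pod comes up. A rough equivalent with curl against the endpoint from the log (--insecure only keeps the sketch short by skipping CA verification):

    until curl -fsS --insecure --max-time 5 https://192.168.50.96:8444/healthz >/dev/null; do
        sleep 0.5
    done
    echo "apiserver is healthy"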
	I1205 20:31:36.544564  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:39.043806  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:37.352371  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:37.352911  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:37.352946  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:37.352862  586921 retry.go:31] will retry after 2.333670622s: waiting for machine to come up
	I1205 20:31:39.688034  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:39.688597  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:39.688630  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:39.688537  586921 retry.go:31] will retry after 2.476657304s: waiting for machine to come up
	I1205 20:31:37.219933  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:37.720360  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.219574  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.720034  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.219449  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.719752  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.219718  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.719771  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.219548  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.720381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.228416  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:42.228489  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:41.044569  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:43.542439  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:45.543063  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:42.168384  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:42.168759  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:42.168781  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:42.168719  586921 retry.go:31] will retry after 3.531210877s: waiting for machine to come up
	I1205 20:31:45.701387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701831  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has current primary IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701868  585025 main.go:141] libmachine: (no-preload-816185) Found IP for machine: 192.168.61.37
	I1205 20:31:45.701882  585025 main.go:141] libmachine: (no-preload-816185) Reserving static IP address...
	I1205 20:31:45.702270  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.702313  585025 main.go:141] libmachine: (no-preload-816185) DBG | skip adding static IP to network mk-no-preload-816185 - found existing host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"}
	I1205 20:31:45.702327  585025 main.go:141] libmachine: (no-preload-816185) Reserved static IP address: 192.168.61.37
	I1205 20:31:45.702343  585025 main.go:141] libmachine: (no-preload-816185) Waiting for SSH to be available...
	I1205 20:31:45.702355  585025 main.go:141] libmachine: (no-preload-816185) DBG | Getting to WaitForSSH function...
	I1205 20:31:45.704606  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.704941  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.704964  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.705115  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH client type: external
	I1205 20:31:45.705146  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa (-rw-------)
	I1205 20:31:45.705181  585025 main.go:141] libmachine: (no-preload-816185) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:45.705212  585025 main.go:141] libmachine: (no-preload-816185) DBG | About to run SSH command:
	I1205 20:31:45.705224  585025 main.go:141] libmachine: (no-preload-816185) DBG | exit 0
	I1205 20:31:45.828472  585025 main.go:141] libmachine: (no-preload-816185) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:45.828882  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetConfigRaw
	I1205 20:31:45.829596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:45.832338  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832643  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.832671  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832970  585025 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/config.json ...
	I1205 20:31:45.833244  585025 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:45.833275  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:45.833498  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.835937  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836344  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.836375  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836555  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.836744  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.836906  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.837046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.837207  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.837441  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.837456  585025 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:45.940890  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:45.940926  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941234  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:31:45.941262  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941453  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.944124  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944537  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.944585  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944677  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.944862  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945026  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945169  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.945343  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.945511  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.945523  585025 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-816185 && echo "no-preload-816185" | sudo tee /etc/hostname
	I1205 20:31:42.220435  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.720366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.219567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.719652  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.220259  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.719556  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.219850  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.720302  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.220377  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.720107  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.229369  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:47.229421  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:46.063755  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-816185
	
	I1205 20:31:46.063794  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.066742  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067177  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.067208  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067371  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.067576  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067756  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067937  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.068147  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.068392  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.068411  585025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-816185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-816185/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-816185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:46.182072  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:46.182110  585025 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:46.182144  585025 buildroot.go:174] setting up certificates
	I1205 20:31:46.182160  585025 provision.go:84] configureAuth start
	I1205 20:31:46.182172  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:46.182490  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:46.185131  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185461  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.185493  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185684  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.188070  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188467  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.188499  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188606  585025 provision.go:143] copyHostCerts
	I1205 20:31:46.188674  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:46.188695  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:46.188753  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:46.188860  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:46.188872  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:46.188892  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:46.188973  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:46.188980  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:46.188998  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:46.189044  585025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.no-preload-816185 san=[127.0.0.1 192.168.61.37 localhost minikube no-preload-816185]
	I1205 20:31:46.460195  585025 provision.go:177] copyRemoteCerts
	I1205 20:31:46.460323  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:46.460394  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.463701  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464171  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.464224  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464422  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.464646  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.464839  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.465024  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.557665  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 20:31:46.583225  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:46.608114  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:46.633059  585025 provision.go:87] duration metric: took 450.879004ms to configureAuth
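configureAuth above regenerates the docker-machine style server certificate with the SANs listed in the provision step (127.0.0.1, 192.168.61.37, localhost, minikube, no-preload-816185) and copies it to /etc/docker on the guest. To double-check which SANs actually ended up in server.pem, a short Go sketch using crypto/x509 (the file path is a placeholder):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Placeholder path; point it at the server.pem under .minikube/machines.
	data, err := os.ReadFile("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Print the DNS and IP subject alternative names baked into the cert.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}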
	I1205 20:31:46.633100  585025 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:46.633319  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:46.633400  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.636634  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637103  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.637138  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637368  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.637624  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.637841  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.638000  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.638189  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.638425  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.638442  585025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:46.877574  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:46.877610  585025 machine.go:96] duration metric: took 1.044347044s to provisionDockerMachine
	I1205 20:31:46.877623  585025 start.go:293] postStartSetup for "no-preload-816185" (driver="kvm2")
	I1205 20:31:46.877634  585025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:46.877668  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:46.878007  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:46.878046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.881022  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881361  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.881422  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881554  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.881741  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.881883  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.882045  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.967997  585025 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:46.972667  585025 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:46.972697  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:46.972770  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:46.972844  585025 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:46.972931  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:46.983157  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:47.009228  585025 start.go:296] duration metric: took 131.588013ms for postStartSetup
	I1205 20:31:47.009272  585025 fix.go:56] duration metric: took 19.33958416s for fixHost
	I1205 20:31:47.009296  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.012039  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012388  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.012416  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012620  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.012858  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013022  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.013318  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:47.013490  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:47.013501  585025 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:47.117166  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430707.083043174
	
	I1205 20:31:47.117195  585025 fix.go:216] guest clock: 1733430707.083043174
	I1205 20:31:47.117203  585025 fix.go:229] Guest: 2024-12-05 20:31:47.083043174 +0000 UTC Remote: 2024-12-05 20:31:47.009275956 +0000 UTC m=+361.003271038 (delta=73.767218ms)
	I1205 20:31:47.117226  585025 fix.go:200] guest clock delta is within tolerance: 73.767218ms
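The clock check above runs `date +%s.%N` on the guest, parses the epoch timestamp, and compares it against the host-side reference time; the ~74ms difference is within tolerance, so no clock adjustment is needed. A rough Go sketch of that comparison, using the values from this run (the helper is illustrative, not minikube's own):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestDelta compares a guest "date +%s.%N" sample against a host-side
// reference time and returns how far the guest clock is ahead (positive)
// or behind (negative).
func guestDelta(guestOut string, hostRef time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostRef), nil
}

func main() {
	// Host reference and guest sample taken from the log lines above.
	host := time.Date(2024, 12, 5, 20, 31, 47, 9275956, time.UTC)
	d, err := guestDelta("1733430707.083043174", host)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v\n", d) // roughly +73ms in this run
}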
	I1205 20:31:47.117232  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 19.447576666s
	I1205 20:31:47.117259  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.117541  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:47.120283  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120627  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.120653  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120805  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121301  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121492  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121612  585025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:47.121656  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.121727  585025 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:47.121750  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.124146  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124503  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124530  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124723  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124922  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124933  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125086  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125126  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125227  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.125505  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125653  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.221731  585025 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:47.228177  585025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:47.377695  585025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:47.384534  585025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:47.384623  585025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:47.402354  585025 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:47.402388  585025 start.go:495] detecting cgroup driver to use...
	I1205 20:31:47.402454  585025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:47.426593  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:47.443953  585025 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:47.444011  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:47.461107  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:47.477872  585025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:47.617097  585025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:47.780021  585025 docker.go:233] disabling docker service ...
	I1205 20:31:47.780140  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:47.795745  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:47.809573  585025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:47.959910  585025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:48.081465  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:48.096513  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:48.116342  585025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:48.116409  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.128016  585025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:48.128095  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.139511  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.151241  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.162858  585025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:48.174755  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.185958  585025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.203724  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
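The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, replace conmon_cgroup with "pod", and ensure default_sysctls contains net.ipv4.ip_unprivileged_port_start=0. A small Go sketch applying two of those edits to an in-memory copy of the file (the regexes approximate the sed expressions and are not lifted from minikube):

package main

import (
	"fmt"
	"regexp"
)

// Approximate Go equivalents of two of the sed edits above: pin the pause
// image and force the cgroupfs cgroup manager in 02-crio.conf.
var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

func patchCrioConf(conf string) string {
	conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	sample := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`
	fmt.Println(patchCrioConf(sample))
}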
	I1205 20:31:48.215682  585025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:48.226478  585025 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:48.226551  585025 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:48.242781  585025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
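The sysctl probe above fails with status 255 because the br_netfilter module is not yet loaded, so /proc/sys/net/bridge/ does not exist; loading the module and enabling IPv4 forwarding is the usual prerequisite for bridge-based pod networking. A quick, read-only Go sketch that inspects the same two knobs (paths as in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// readSysctl returns the current value of a /proc/sys entry, or an error if
// the entry does not exist (e.g. br_netfilter not loaded yet).
func readSysctl(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	for _, p := range []string{
		"/proc/sys/net/bridge/bridge-nf-call-iptables",
		"/proc/sys/net/ipv4/ip_forward",
	} {
		if v, err := readSysctl(p); err != nil {
			fmt.Printf("%s: %v\n", p, err)
		} else {
			fmt.Printf("%s = %s\n", p, v)
		}
	}
}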
	I1205 20:31:48.254921  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:48.373925  585025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:48.471515  585025 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:48.471625  585025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:48.477640  585025 start.go:563] Will wait 60s for crictl version
	I1205 20:31:48.477707  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.481862  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:48.521367  585025 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:48.521465  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.552343  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.583089  585025 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:31:48.043043  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:50.043172  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:48.584504  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:48.587210  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587539  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:48.587568  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587788  585025 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:48.592190  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:48.606434  585025 kubeadm.go:883] updating cluster {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:48.606605  585025 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:48.606666  585025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:48.642948  585025 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:48.642978  585025 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:48.643061  585025 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.643092  585025 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.643168  585025 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.643075  585025 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.643248  585025 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 20:31:48.643119  585025 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644692  585025 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.644712  585025 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 20:31:48.644694  585025 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.644798  585025 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.644800  585025 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644858  585025 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.811007  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.819346  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.859678  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 20:31:48.864065  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.864191  585025 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 20:31:48.864249  585025 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.864310  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.883959  585025 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 20:31:48.884022  585025 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.884078  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.902180  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.918167  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.946617  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.039706  585025 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 20:31:49.039760  585025 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.039783  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.039808  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039869  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.039887  585025 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 20:31:49.039913  585025 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.039938  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039947  585025 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 20:31:49.039969  585025 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.040001  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.040002  585025 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 20:31:49.040026  585025 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.040069  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.098900  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.098990  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.105551  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.105588  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.105612  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.105646  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.201473  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.218211  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.257277  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.257335  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.257345  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.257479  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.316037  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 20:31:49.316135  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.316159  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.356780  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 20:31:49.356906  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:49.382843  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.405772  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.405863  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.428491  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 20:31:49.428541  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 20:31:49.428563  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428587  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 20:31:49.428611  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428648  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:49.487794  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 20:31:49.487825  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 20:31:49.487893  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 20:31:49.487917  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:31:49.487927  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:49.488022  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:31:49.830311  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:47.219913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.720441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.220220  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.719997  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.219843  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.719591  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.220132  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.719528  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.720234  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
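Interleaved with the no-preload image work, a second run (process 585602) is polling `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms, waiting for a kube-apiserver process to appear on its node. A generic Go sketch of that kind of poll-until-deadline loop (the helper name and timeout are illustrative, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` at the given interval until a
// matching process exists or the timeout elapses, mirroring the repeated
// pgrep runs in the log above.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // a matching process exists
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q after %v", pattern, timeout)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 10*time.Second)
	fmt.Println(err)
}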
	I1205 20:31:52.230527  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:52.230575  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:52.543415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:55.042668  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:52.150499  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.721854606s)
	I1205 20:31:52.150547  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 20:31:52.150573  585025 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150588  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.721911838s)
	I1205 20:31:52.150623  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150627  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 20:31:52.150697  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.662646854s)
	I1205 20:31:52.150727  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 20:31:52.150752  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.662648047s)
	I1205 20:31:52.150776  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 20:31:52.150785  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.662799282s)
	I1205 20:31:52.150804  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1205 20:31:52.150834  585025 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.320487562s)
	I1205 20:31:52.150874  585025 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:31:52.150907  585025 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.150943  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:55.858372  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.707687772s)
	I1205 20:31:55.858414  585025 ssh_runner.go:235] Completed: which crictl: (3.707446137s)
	I1205 20:31:55.858498  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:55.858426  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 20:31:55.858580  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.858640  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.901375  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.219602  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.719522  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.220117  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.720426  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.220177  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.720100  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.219569  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.719796  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.219490  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.720420  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.231370  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:57.231415  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.612431  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": read tcp 192.168.50.1:36198->192.168.50.96:8444: read: connection reset by peer
	I1205 20:31:57.727638  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.728368  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
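The health probes above (process 585929) repeatedly hit the apiserver's /healthz endpoint on 192.168.50.96:8444 and are failing with timeouts, a reset connection, and finally connection refused while that control plane comes back up. A minimal Go sketch of a single such probe (InsecureSkipVerify stands in for the CA handling a real client would do; illustrative only):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz performs one /healthz request with a short timeout, skipping
// certificate verification as a stand-in for trusting the cluster CA.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Address taken from the log above; expect connection refused while the
	// apiserver is down.
	fmt.Println(checkHealthz("https://192.168.50.96:8444/healthz"))
}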
	I1205 20:31:57.042989  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:59.043517  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:57.843623  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.984954959s)
	I1205 20:31:57.843662  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 20:31:57.843683  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843731  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843732  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.942323285s)
	I1205 20:31:57.843821  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:00.030765  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.186998467s)
	I1205 20:32:00.030810  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 20:32:00.030840  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.030846  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.18699947s)
	I1205 20:32:00.030897  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:32:00.030906  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.031026  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:31:57.219497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.720337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.219807  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.720112  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.219949  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.719626  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.219871  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.719466  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.219491  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.719760  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.227807  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:01.044658  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:03.542453  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:05.542887  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:01.486433  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455500806s)
	I1205 20:32:01.486479  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 20:32:01.486512  585025 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:01.486513  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.455460879s)
	I1205 20:32:01.486589  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:32:01.486592  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:03.658906  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.172262326s)
	I1205 20:32:03.658947  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 20:32:03.658979  585025 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:03.659024  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:04.304774  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 20:32:04.304825  585025 cache_images.go:123] Successfully loaded all cached images
	I1205 20:32:04.304832  585025 cache_images.go:92] duration metric: took 15.661840579s to LoadCachedImages
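The image-loading phase above first checks, via `stat -c "%s %y"`, whether each cached tarball already sits on the guest; matching files are skipped ("copy: skipping ... (exists)") and each tarball is then loaded with `podman load -i`, taking ~15.7s in total here. A simplified Go sketch of that size/mtime skip check (a local-only stand-in for the remote stat):

package main

import (
	"fmt"
	"os"
)

// sameFile reports whether dst already matches src by size and modification
// time, the cheap check used to decide whether a cached image tarball needs
// to be copied to the guest again.
func sameFile(src, dst string) (bool, error) {
	s, err := os.Stat(src)
	if err != nil {
		return false, err
	}
	d, err := os.Stat(dst)
	if err != nil {
		if os.IsNotExist(err) {
			return false, nil // not there yet: must copy
		}
		return false, err
	}
	return s.Size() == d.Size() && s.ModTime().Equal(d.ModTime()), nil
}

func main() {
	ok, err := sameFile("kube-apiserver_v1.31.2", "/var/lib/minikube/images/kube-apiserver_v1.31.2")
	fmt.Println(ok, err)
}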
	I1205 20:32:04.304846  585025 kubeadm.go:934] updating node { 192.168.61.37 8443 v1.31.2 crio true true} ...
	I1205 20:32:04.304983  585025 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-816185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
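The systemd drop-in above overrides the kubelet's ExecStart so it runs with the bootstrap kubeconfig, the generated /var/lib/kubelet/config.yaml, the node's hostname override, and its node IP; the block after "config:" is the profile's KubernetesConfig those flags are derived from. A minimal Go sketch that renders such a drop-in from a template (field names are illustrative, not minikube's):

package main

import (
	"os"
	"text/template"
)

// kubeletUnit holds the handful of values the drop-in above is built from.
type kubeletUnit struct {
	KubeletPath string
	Hostname    string
	NodeIP      string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletUnit{
		KubeletPath: "/var/lib/minikube/binaries/v1.31.2/kubelet",
		Hostname:    "no-preload-816185",
		NodeIP:      "192.168.61.37",
	})
}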
	I1205 20:32:04.305057  585025 ssh_runner.go:195] Run: crio config
	I1205 20:32:04.350303  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:04.350332  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:04.350352  585025 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:32:04.350383  585025 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.37 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-816185 NodeName:no-preload-816185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:32:04.350534  585025 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-816185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.37"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.37"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
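The generated config above is a multi-document YAML: an InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration pinned to cgroupfs and the CRI-O socket, and a KubeProxyConfiguration with the 10.244.0.0/16 cluster CIDR; it is later written to /var/tmp/minikube/kubeadm.yaml.new. A quick Go sketch that splits such a file on its "---" separators and reports each document's apiVersion and kind (uses gopkg.in/yaml.v3; the path is a placeholder):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Placeholder path; point it at the generated kubeadm.yaml.new.
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	// Split on the document separators and report apiVersion/kind per doc.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var m struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", m.APIVersion, m.Kind)
	}
}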
	
	I1205 20:32:04.350618  585025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:32:04.362733  585025 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:32:04.362815  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:32:04.374219  585025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 20:32:04.392626  585025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:32:04.409943  585025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1205 20:32:04.428180  585025 ssh_runner.go:195] Run: grep 192.168.61.37	control-plane.minikube.internal$ /etc/hosts
	I1205 20:32:04.432433  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:32:04.447274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:04.591755  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:04.609441  585025 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185 for IP: 192.168.61.37
	I1205 20:32:04.609472  585025 certs.go:194] generating shared ca certs ...
	I1205 20:32:04.609494  585025 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:04.609664  585025 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:32:04.609729  585025 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:32:04.609745  585025 certs.go:256] generating profile certs ...
	I1205 20:32:04.609910  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/client.key
	I1205 20:32:04.609991  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key.e9b85612
	I1205 20:32:04.610027  585025 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key
	I1205 20:32:04.610146  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:32:04.610173  585025 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:32:04.610182  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:32:04.610216  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:32:04.610264  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:32:04.610313  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:32:04.610377  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
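
The certs.go lines above enumerate the shared certificate material under ~/.minikube, skipping zero-byte leftovers such as 538186_empty.pem. A rough Go sketch of that scan; the directory path and output format are assumptions, not minikube's actual certs.go code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	certDir := os.ExpandEnv("$HOME/.minikube/certs")
	entries, err := os.ReadDir(certDir)
	if err != nil {
		fmt.Println("read certs dir:", err)
		return
	}
	for _, e := range entries {
		if e.IsDir() || !strings.HasSuffix(e.Name(), ".pem") {
			continue
		}
		info, err := e.Info()
		if err != nil {
			continue
		}
		p := filepath.Join(certDir, e.Name())
		if info.Size() == 0 {
			// Mirrors the warning in the log: zero-byte cert files are skipped.
			fmt.Printf("ignoring %s, impossibly tiny 0 bytes\n", p)
			continue
		}
		fmt.Printf("found cert: %s (%d bytes)\n", p, info.Size())
	}
}
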
	I1205 20:32:04.611264  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:32:04.642976  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:32:04.679840  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:32:04.707526  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:32:04.746333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:32:04.782671  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:32:04.819333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:32:04.845567  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:32:04.870304  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:32:04.894597  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:32:04.918482  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:32:04.942992  585025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:32:04.960576  585025 ssh_runner.go:195] Run: openssl version
	I1205 20:32:04.966908  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:32:04.978238  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.982959  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.983023  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.989070  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:32:05.000979  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:32:05.012901  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.017583  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.018169  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.025450  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:32:05.037419  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:32:05.050366  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055211  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055255  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.061388  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
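
Each CA installed above follows the same pattern: hash the PEM with openssl x509 -hash, then symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's hashed directory lookup can find it. A small Go sketch of that pattern, run locally rather than over minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCA computes the OpenSSL subject hash of a CA certificate and links
// /etc/ssl/certs/<hash>.0 at it.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// test -L || ln -fs keeps the operation idempotent, as in the log.
	cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link)
	return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/538186.pem",
	} {
		if err := installCA(pem); err != nil {
			fmt.Println(err)
		}
	}
}
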
	I1205 20:32:05.074182  585025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:32:05.079129  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:32:05.085580  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:32:05.091938  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:32:05.099557  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:32:05.105756  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:32:05.112019  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
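
The openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours before the existing cluster is reused. A Go sketch of the same pre-check, assuming local exec:

package main

import (
	"fmt"
	"os/exec"
)

func expiresWithinADay(certPath string) bool {
	// `openssl x509 -checkend N` exits 0 if the cert is still valid N
	// seconds from now, non-zero if it will have expired by then.
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		if expiresWithinADay(c) {
			fmt.Println("certificate needs regeneration:", c)
		}
	}
}
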
	I1205 20:32:05.118426  585025 kubeadm.go:392] StartCluster: {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
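
The StartCluster dump above is the %+v rendering of the cluster config minikube builds for this profile. A trimmed-down illustration of its shape, with field names and values taken from the dump; this is not minikube's real config type, which carries many more fields:

package main

import "fmt"

type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	ServiceCIDR       string
	DNSDomain         string
}

type Node struct {
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	DiskSize         int
	KubernetesConfig KubernetesConfig
	Nodes            []Node
	Addons           map[string]bool
}

func main() {
	cfg := ClusterConfig{
		Name:   "no-preload-816185",
		Driver: "kvm2",
		Memory: 2200, CPUs: 2, DiskSize: 20000,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.31.2",
			ClusterName:       "no-preload-816185",
			ContainerRuntime:  "crio",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
			DNSDomain:         "cluster.local",
		},
		Nodes:  []Node{{IP: "192.168.61.37", Port: 8443, KubernetesVersion: "v1.31.2", ControlPlane: true, Worker: true}},
		Addons: map[string]bool{"default-storageclass": true, "metrics-server": true, "storage-provisioner": true},
	}
	fmt.Printf("StartCluster: %+v\n", cfg)
}
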
	I1205 20:32:05.118540  585025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:32:05.118622  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.162731  585025 cri.go:89] found id: ""
	I1205 20:32:05.162821  585025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:32:05.174100  585025 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:32:05.174127  585025 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:32:05.174181  585025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:32:05.184949  585025 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:32:05.186127  585025 kubeconfig.go:125] found "no-preload-816185" server: "https://192.168.61.37:8443"
	I1205 20:32:05.188601  585025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:32:05.198779  585025 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.37
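
The diff -u of kubeadm.yaml against kubeadm.yaml.new above is the reconfiguration check: identical files mean the running cluster can be reused as-is, while any difference forces a config rewrite. A Go sketch of that decision, assuming local exec in place of the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func needsReconfigure() bool {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	// diff exits 0 when the files match, 1 when they differ, >1 on error;
	// treat anything non-zero as "reconfigure to be safe".
	return cmd.Run() != nil
}

func main() {
	if needsReconfigure() {
		fmt.Println("kubeadm config changed: restart required")
		return
	}
	fmt.Println("running cluster does not require reconfiguration")
}
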
	I1205 20:32:05.198815  585025 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:32:05.198828  585025 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:32:05.198881  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.241175  585025 cri.go:89] found id: ""
	I1205 20:32:05.241247  585025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:32:05.259698  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:32:05.270282  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:32:05.270310  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:32:05.270370  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:32:05.280440  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:32:05.280519  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:32:05.290825  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:32:05.300680  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:32:05.300745  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:32:05.311108  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.320854  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:32:05.320918  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.331099  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:32:05.340948  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:32:05.341017  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
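
The grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so kubeadm can regenerate it. A Go sketch of the loop, assuming local exec:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, conf)
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}
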
	I1205 20:32:05.351280  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:32:05.361567  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:05.477138  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
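
The phased restart above runs individual kubeadm init phases with PATH pointed at the version-matched binaries under /var/lib/minikube/binaries. A Go sketch that rebuilds those commands; local exec stands in for the SSH runner used in the log:

package main

import (
	"fmt"
	"os/exec"
)

func runPhase(version, phase string) error {
	cmd := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
		version, phase)
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	// The phases below mirror the order in the log: certs, kubeconfigs,
	// kubelet, static control-plane manifests, then local etcd.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		if err := runPhase("v1.31.2", p); err != nil {
			fmt.Printf("kubeadm init phase %s failed: %v\n", p, err)
			return
		}
	}
}
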
	I1205 20:32:02.220337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:02.720145  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.219463  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.719913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.219813  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.719940  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.219830  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.720324  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.220287  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.719584  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.228372  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:03.228433  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:08.042416  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:10.043011  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:06.259256  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.483460  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.557633  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.666782  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:32:06.666885  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.167840  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.667069  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.701559  585025 api_server.go:72] duration metric: took 1.034769472s to wait for apiserver process to appear ...
	I1205 20:32:07.701592  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:32:07.701612  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.640462  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.640498  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.640521  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.647093  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.647118  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.702286  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.711497  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:10.711528  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
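
The polling above treats 403 (anonymous access rejected before RBAC bootstrap) and 500 (post-start hooks still failing) as "not ready yet" and only stops on a 200 "ok". A Go sketch of such a healthz wait; skipping TLS verification is an assumption made because the probe runs before a trusted client config is available:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			// 403 and 500 both mean the apiserver is up but not yet serving.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.37:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
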
	I1205 20:32:07.219989  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.720289  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.220381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.719947  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.219838  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.719666  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.219756  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.720312  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.220369  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.720004  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.202247  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.206625  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.206650  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:11.702760  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.718941  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.718974  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:12.202567  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:12.207589  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:32:12.214275  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:12.214304  585025 api_server.go:131] duration metric: took 4.512704501s to wait for apiserver health ...
	I1205 20:32:12.214314  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:12.214321  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:12.216193  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:08.229499  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:08.229544  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:12.545378  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:15.043628  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:12.217640  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:12.241907  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:32:12.262114  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:12.275246  585025 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:12.275296  585025 system_pods.go:61] "coredns-7c65d6cfc9-j2hr2" [9ce413ab-c304-40dd-af68-80f15db0e2ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:12.275308  585025 system_pods.go:61] "etcd-no-preload-816185" [ddc20062-02d9-4f9d-a2fb-fa2c7d6aa1cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:12.275319  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [07ff76f2-b05e-4434-b8f9-448bc200507a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:12.275328  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [7c701058-791a-4097-a913-f6989a791067] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:12.275340  585025 system_pods.go:61] "kube-proxy-rjp4j" [340e9ccc-0290-4d3d-829c-44ad65410f3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:12.275348  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [c2f3b04c-9e3a-4060-a6d0-fb9eb2aa5e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:32:12.275359  585025 system_pods.go:61] "metrics-server-6867b74b74-vjwq2" [47ff24fe-0edb-4d06-b280-a0d965b25dae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:12.275367  585025 system_pods.go:61] "storage-provisioner" [bd385e87-56ea-417c-a4a8-b8a6e4f94114] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:12.275376  585025 system_pods.go:74] duration metric: took 13.23725ms to wait for pod list to return data ...
	I1205 20:32:12.275387  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:12.279719  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:12.279746  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:12.279755  585025 node_conditions.go:105] duration metric: took 4.364464ms to run NodePressure ...
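
The node_conditions lines above read node capacity and verify there is no resource pressure before proceeding. A Go sketch of an equivalent check using client-go directly; minikube's own helper differs, and the kubeconfig path here is an assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n",
			n.Name, cpu.String(), storage.String())
		// A node under disk or memory pressure reports it via conditions.
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodeMemoryPressure) &&
				c.Status == corev1.ConditionTrue {
				fmt.Printf("node %s reports %s\n", n.Name, c.Type)
			}
		}
	}
}
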
	I1205 20:32:12.279774  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:12.562221  585025 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566599  585025 kubeadm.go:739] kubelet initialised
	I1205 20:32:12.566627  585025 kubeadm.go:740] duration metric: took 4.374855ms waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566639  585025 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:12.571780  585025 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:14.579614  585025 pod_ready.go:103] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"False"
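
The pod_ready lines above poll each system pod until its Ready condition turns True, within a 4-minute budget. A Go sketch of that wait for a single pod, again using client-go directly with an assumed kubeconfig path:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.Background(), "coredns-7c65d6cfc9-j2hr2", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
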
	I1205 20:32:12.220304  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:12.720348  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.219553  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.720078  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.219614  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.719625  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.220118  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.720577  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.220392  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.719538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.230519  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:13.230567  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.061543  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.061583  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.061603  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.078424  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.078457  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.227852  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.553664  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.553705  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:16.728155  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.734800  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.734853  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.228013  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.233541  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:17.233577  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.727878  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.736731  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:32:17.746474  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:17.746511  585929 api_server.go:131] duration metric: took 41.019245279s to wait for apiserver health ...
	I1205 20:32:17.746523  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:32:17.746531  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:17.748464  585929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:17.750113  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:17.762750  585929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:32:17.786421  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:17.826859  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:17.826918  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:17.826934  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:17.826946  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:17.826959  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:17.826969  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:17.826980  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:32:17.826989  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:17.827000  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:17.827010  585929 system_pods.go:74] duration metric: took 40.565274ms to wait for pod list to return data ...
	I1205 20:32:17.827025  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:17.838000  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:17.838034  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:17.838050  585929 node_conditions.go:105] duration metric: took 11.010352ms to run NodePressure ...
	I1205 20:32:17.838075  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:18.215713  585929 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222162  585929 kubeadm.go:739] kubelet initialised
	I1205 20:32:18.222187  585929 kubeadm.go:740] duration metric: took 6.444578ms waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222199  585929 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:18.226988  585929 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.235570  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235600  585929 pod_ready.go:82] duration metric: took 8.582972ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.235609  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235617  585929 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.242596  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242623  585929 pod_ready.go:82] duration metric: took 6.99814ms for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.242634  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242642  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.248351  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248373  585929 pod_ready.go:82] duration metric: took 5.725371ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.248383  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248390  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.258151  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258174  585929 pod_ready.go:82] duration metric: took 9.778119ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.258183  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258190  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.619579  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619623  585929 pod_ready.go:82] duration metric: took 361.426091ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.619638  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619649  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.019623  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019655  585929 pod_ready.go:82] duration metric: took 399.997558ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.019669  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019676  585929 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.420201  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420228  585929 pod_ready.go:82] duration metric: took 400.54576ms for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.420242  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420251  585929 pod_ready.go:39] duration metric: took 1.198040831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
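For context on the pod_ready loop above: it checks each system-critical pod's Ready condition, but gives up early ("skipping!") while the hosting node itself still reports Ready=False. A minimal Go sketch of that pattern, shelling out to kubectl (context, node, and pod names are copied from the log as examples; the 2s interval is illustrative and not taken from the minikube source):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// kubectlJSONPath runs `kubectl --context <ctx> get <kind> <name> -o jsonpath=<expr>`
// and returns the trimmed result.
func kubectlJSONPath(kubeCtx, ns, kind, name, expr string) (string, error) {
	args := []string{"--context", kubeCtx, "get", kind, name, "-o", "jsonpath=" + expr}
	if ns != "" {
		args = append(args, "-n", ns)
	}
	out, err := exec.Command("kubectl", args...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const (
		kubeCtx = "default-k8s-diff-port-942599" // example context from the log
		node    = "default-k8s-diff-port-942599"
		ns      = "kube-system"
		pod     = "coredns-7c65d6cfc9-5drgc" // example pod from the log
	)
	ready := `{.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// If the hosting node is not Ready, waiting on the pod is pointless,
		// which is why the log above marks these pods as "(skipping!)".
		if status, _ := kubectlJSONPath(kubeCtx, "", "node", node, ready); status != "True" {
			fmt.Printf("node %s not Ready yet (%q), skipping pod check\n", node, status)
		} else if status, _ := kubectlJSONPath(kubeCtx, ns, "pod", pod, ready); status == "True" {
			fmt.Printf("pod %s is Ready\n", pod)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod readiness")
}
```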
	I1205 20:32:19.420292  585929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:32:19.434385  585929 ops.go:34] apiserver oom_adj: -16
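The oom_adj probe above simply reads /proc/<apiserver-pid>/oom_adj; the logged value of -16 indicates the apiserver is shielded from the OOM killer. A tiny sketch of the same check, assuming it runs on the node with permission to read /proc:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep prints one PID per line; take the first match.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.Fields(string(out))[0]

	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read oom_adj:", err)
		os.Exit(1)
	}
	// The log above records -16 for this value.
	fmt.Printf("apiserver pid %s oom_adj: %s", pid, adj)
}
```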
	I1205 20:32:19.434420  585929 kubeadm.go:597] duration metric: took 45.406934122s to restartPrimaryControlPlane
	I1205 20:32:19.434434  585929 kubeadm.go:394] duration metric: took 45.464483994s to StartCluster
	I1205 20:32:19.434460  585929 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.434560  585929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:32:19.436299  585929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.436590  585929 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:32:19.436736  585929 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:32:19.436837  585929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436858  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:32:19.436873  585929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.436883  585929 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:32:19.436923  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.436938  585929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436974  585929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-942599"
	I1205 20:32:19.436922  585929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.437024  585929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.437051  585929 addons.go:243] addon metrics-server should already be in state true
	I1205 20:32:19.437090  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.437365  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437407  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437452  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437480  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437509  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437514  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.438584  585929 out.go:177] * Verifying Kubernetes components...
	I1205 20:32:19.440376  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:19.453761  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I1205 20:32:19.453782  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I1205 20:32:19.453767  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I1205 20:32:19.454289  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454441  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454451  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454851  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454871  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.455005  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455021  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455286  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455350  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455409  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455461  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.455910  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455927  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455958  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.455966  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.458587  585929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.458605  585929 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:32:19.458627  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.458955  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.458995  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.472175  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I1205 20:32:19.472667  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.472927  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I1205 20:32:19.473215  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.473233  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.473401  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40929
	I1205 20:32:19.473570  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473608  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.473839  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.474155  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474187  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474290  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474313  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474546  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474638  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474711  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.475267  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.475320  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.476105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.476447  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.478117  585929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:19.478117  585929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:32:17.545165  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.044285  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:17.079986  585025 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:17.080014  585025 pod_ready.go:82] duration metric: took 4.508210865s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:17.080025  585025 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.086070  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.587742  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:20.587775  585025 pod_ready.go:82] duration metric: took 3.507742173s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:20.587789  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.479638  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:32:19.479658  585929 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:32:19.479686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.479719  585929 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.479737  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:32:19.479750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.483208  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483350  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483773  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483790  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483873  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483887  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483936  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484123  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484294  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484324  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484438  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.484456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484571  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.533651  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I1205 20:32:19.534273  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.534802  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.534833  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.535282  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.535535  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.538221  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.538787  585929 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.538804  585929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:32:19.538825  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.541876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542318  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.542354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542556  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.542744  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.542944  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.543129  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.630282  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:19.652591  585929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:19.719058  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.810931  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.812113  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:32:19.812136  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:32:19.875725  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:32:19.875761  585929 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:32:19.946353  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:19.946390  585929 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:32:20.010445  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:20.231055  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231082  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231425  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231454  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231469  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231478  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231476  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.231764  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231784  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231783  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.247021  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.247051  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.247463  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.247490  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.247488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.074948  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.263976727s)
	I1205 20:32:21.075015  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075029  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075397  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075438  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.075449  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075457  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.075766  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075785  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134215  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.123724822s)
	I1205 20:32:21.134271  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134588  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134604  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134612  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134615  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.134620  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134878  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134891  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134904  585929 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-942599"
	I1205 20:32:21.136817  585929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
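The addon step logged above boils down to staging YAML under /etc/kubernetes/addons over SSH and then running the node-local kubectl against /var/lib/minikube/kubeconfig. An illustrative sketch of that final apply (paths and file list mirror the log; in the actual run the command executes on the node under sudo, and this is not the minikube implementation itself):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}

	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.2/kubectl", args...)
	// Point kubectl at the cluster's own kubeconfig, as the logged commands do.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}
```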
	I1205 20:32:17.220437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:17.220539  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:17.272666  585602 cri.go:89] found id: ""
	I1205 20:32:17.272702  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.272716  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:17.272723  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:17.272797  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:17.314947  585602 cri.go:89] found id: ""
	I1205 20:32:17.314977  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.314989  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:17.314996  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:17.315061  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:17.354511  585602 cri.go:89] found id: ""
	I1205 20:32:17.354548  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.354561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:17.354571  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:17.354640  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:17.393711  585602 cri.go:89] found id: ""
	I1205 20:32:17.393745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.393759  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:17.393768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:17.393836  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:17.434493  585602 cri.go:89] found id: ""
	I1205 20:32:17.434526  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.434535  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:17.434541  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:17.434602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:17.476201  585602 cri.go:89] found id: ""
	I1205 20:32:17.476235  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.476245  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:17.476253  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:17.476341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:17.516709  585602 cri.go:89] found id: ""
	I1205 20:32:17.516745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.516755  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:17.516762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:17.516818  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:17.557270  585602 cri.go:89] found id: ""
	I1205 20:32:17.557305  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.557314  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
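The block above is one sweep of the same probe repeated for every expected control-plane component: ask crictl for container IDs by name and treat an empty result (`found id: ""`) as "not running yet". A compact Go sketch of that sweep, with the component list and crictl flags copied from the logged commands:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same invocation as in the log: list all containers, IDs only, filtered by name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%-25s crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%-25s %d container(s): %v\n", name, len(ids), ids)
	}
}
```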
	I1205 20:32:17.557324  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:17.557348  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:17.606494  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:17.606540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:17.681372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:17.681412  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:17.696778  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:17.696816  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:17.839655  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:17.839679  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:17.839717  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
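Each retry cycle above ends with a diagnostic sweep: kubelet and CRI-O journals, dmesg, container status, and a `describe nodes` that currently fails because nothing answers on localhost:8443. A sketch of that sweep that tolerates per-step failures (the shell commands are copied from the log; the surrounding Go is illustrative, not the minikube code):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	type step struct{ name, cmd string }
	steps := []step{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// Keep going: one failing collector (here "describe nodes", since the
			// apiserver on localhost:8443 is down) should not abort the whole sweep.
			fmt.Printf("== %s failed: %v ==\n%s\n", s.name, err, out)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", s.name, out)
	}
}
```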
	I1205 20:32:20.423552  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:20.439794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:20.439875  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:20.482820  585602 cri.go:89] found id: ""
	I1205 20:32:20.482866  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.482880  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:20.482888  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:20.482958  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:20.523590  585602 cri.go:89] found id: ""
	I1205 20:32:20.523629  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.523641  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:20.523649  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:20.523727  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:20.601603  585602 cri.go:89] found id: ""
	I1205 20:32:20.601638  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.601648  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:20.601656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:20.601728  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:20.643927  585602 cri.go:89] found id: ""
	I1205 20:32:20.643959  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.643972  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:20.643981  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:20.644054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:20.690935  585602 cri.go:89] found id: ""
	I1205 20:32:20.690964  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.690975  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:20.690984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:20.691054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:20.728367  585602 cri.go:89] found id: ""
	I1205 20:32:20.728400  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.728412  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:20.728420  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:20.728489  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:20.766529  585602 cri.go:89] found id: ""
	I1205 20:32:20.766562  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.766571  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:20.766578  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:20.766657  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:20.805641  585602 cri.go:89] found id: ""
	I1205 20:32:20.805680  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.805690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:20.805701  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:20.805718  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:20.884460  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:20.884495  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:20.884514  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.998367  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:20.998429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:21.041210  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:21.041247  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:21.103519  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:21.103557  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:21.138175  585929 addons.go:510] duration metric: took 1.701453382s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:32:21.657269  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:22.541880  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:24.543481  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:22.595422  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.594392  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:23.594419  585025 pod_ready.go:82] duration metric: took 3.006622534s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:23.594430  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:25.601616  585025 pod_ready.go:103] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.619187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:23.633782  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:23.633872  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:23.679994  585602 cri.go:89] found id: ""
	I1205 20:32:23.680023  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.680032  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:23.680038  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:23.680094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:23.718362  585602 cri.go:89] found id: ""
	I1205 20:32:23.718425  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.718439  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:23.718447  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:23.718520  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:23.758457  585602 cri.go:89] found id: ""
	I1205 20:32:23.758491  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.758500  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:23.758506  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:23.758558  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:23.794612  585602 cri.go:89] found id: ""
	I1205 20:32:23.794649  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.794662  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:23.794671  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:23.794738  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:23.832309  585602 cri.go:89] found id: ""
	I1205 20:32:23.832341  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.832354  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:23.832361  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:23.832421  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:23.868441  585602 cri.go:89] found id: ""
	I1205 20:32:23.868472  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.868484  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:23.868492  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:23.868573  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:23.902996  585602 cri.go:89] found id: ""
	I1205 20:32:23.903025  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.903036  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:23.903050  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:23.903115  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:23.939830  585602 cri.go:89] found id: ""
	I1205 20:32:23.939865  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.939879  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:23.939892  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:23.939909  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:23.992310  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:23.992354  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:24.007378  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:24.007414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:24.077567  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:24.077594  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:24.077608  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:24.165120  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:24.165163  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:26.711674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:26.726923  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:26.727008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:26.763519  585602 cri.go:89] found id: ""
	I1205 20:32:26.763554  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.763563  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:26.763570  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:26.763628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:26.802600  585602 cri.go:89] found id: ""
	I1205 20:32:26.802635  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.802644  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:26.802650  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:26.802705  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:26.839920  585602 cri.go:89] found id: ""
	I1205 20:32:26.839967  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.839981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:26.839989  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:26.840076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:24.157515  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:26.657197  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:27.656811  585929 node_ready.go:49] node "default-k8s-diff-port-942599" has status "Ready":"True"
	I1205 20:32:27.656842  585929 node_ready.go:38] duration metric: took 8.004215314s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:27.656854  585929 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:27.662792  585929 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668485  585929 pod_ready.go:93] pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.668510  585929 pod_ready.go:82] duration metric: took 5.690516ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668521  585929 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
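As an aside, the hand-rolled polling in these node_ready/pod_ready lines can also be expressed with `kubectl wait`, which blocks until a condition holds or the timeout expires. A sketch using the same 6m0s budget (context and node names are the log's; waiting on --all pods is a simplification and would block here on the not-yet-Ready metrics-server pod):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// kubectlWait runs `kubectl --context <ctx> wait <args...>` and streams its output.
func kubectlWait(args ...string) error {
	all := append([]string{"--context", "default-k8s-diff-port-942599", "wait"}, args...)
	cmd := exec.Command("kubectl", all...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := kubectlWait("--for=condition=Ready", "node/default-k8s-diff-port-942599", "--timeout=6m"); err != nil {
		fmt.Fprintln(os.Stderr, "node wait failed:", err)
		os.Exit(1)
	}
	if err := kubectlWait("-n", "kube-system", "--for=condition=Ready", "pod", "--all", "--timeout=6m"); err != nil {
		fmt.Fprintln(os.Stderr, "pod wait failed:", err)
		os.Exit(1)
	}
	fmt.Println("node and kube-system pods are Ready")
}
```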
	I1205 20:32:26.543536  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:28.544214  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:27.101514  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.101540  585025 pod_ready.go:82] duration metric: took 3.507102769s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.101551  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108084  585025 pod_ready.go:93] pod "kube-proxy-rjp4j" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.108116  585025 pod_ready.go:82] duration metric: took 6.557141ms for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108131  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112915  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.112942  585025 pod_ready.go:82] duration metric: took 4.801285ms for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112955  585025 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.119094  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:26.876377  585602 cri.go:89] found id: ""
	I1205 20:32:26.876406  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.876416  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:26.876422  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:26.876491  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:26.913817  585602 cri.go:89] found id: ""
	I1205 20:32:26.913845  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.913854  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:26.913862  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:26.913936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:26.955739  585602 cri.go:89] found id: ""
	I1205 20:32:26.955775  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.955788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:26.955798  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:26.955863  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:26.996191  585602 cri.go:89] found id: ""
	I1205 20:32:26.996223  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.996234  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:26.996242  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:26.996341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:27.040905  585602 cri.go:89] found id: ""
	I1205 20:32:27.040935  585602 logs.go:282] 0 containers: []
	W1205 20:32:27.040947  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:27.040958  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:27.040973  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:27.098103  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:27.098140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:27.116538  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:27.116574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:27.204154  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:27.204187  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:27.204208  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:27.300380  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:27.300431  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:29.840944  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:29.855784  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:29.855869  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:29.893728  585602 cri.go:89] found id: ""
	I1205 20:32:29.893765  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.893777  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:29.893786  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:29.893867  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:29.930138  585602 cri.go:89] found id: ""
	I1205 20:32:29.930176  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.930186  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:29.930193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:29.930248  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:29.966340  585602 cri.go:89] found id: ""
	I1205 20:32:29.966371  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.966380  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:29.966387  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:29.966463  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:30.003868  585602 cri.go:89] found id: ""
	I1205 20:32:30.003900  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.003920  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:30.003928  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:30.004001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:30.044332  585602 cri.go:89] found id: ""
	I1205 20:32:30.044363  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.044373  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:30.044380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:30.044445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:30.088044  585602 cri.go:89] found id: ""
	I1205 20:32:30.088085  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.088098  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:30.088106  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:30.088173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:30.124221  585602 cri.go:89] found id: ""
	I1205 20:32:30.124248  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.124258  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:30.124285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:30.124357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:30.162092  585602 cri.go:89] found id: ""
	I1205 20:32:30.162121  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.162133  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:30.162146  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:30.162162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:30.218526  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:30.218567  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:30.232240  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:30.232292  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:30.308228  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:30.308260  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:30.308296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:30.389348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:30.389391  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:29.177093  585929 pod_ready.go:93] pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.177118  585929 pod_ready.go:82] duration metric: took 1.508590352s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.177129  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185839  585929 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.185869  585929 pod_ready.go:82] duration metric: took 8.733028ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185883  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191924  585929 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.191950  585929 pod_ready.go:82] duration metric: took 6.059525ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191963  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256484  585929 pod_ready.go:93] pod "kube-proxy-5vdcq" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.256510  585929 pod_ready.go:82] duration metric: took 64.540117ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256521  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656933  585929 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.656961  585929 pod_ready.go:82] duration metric: took 400.432279ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656972  585929 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:31.664326  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.043630  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.044035  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.542861  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.120200  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.120303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.120532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:32.934497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:32.949404  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:32.949488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:33.006117  585602 cri.go:89] found id: ""
	I1205 20:32:33.006148  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.006157  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:33.006163  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:33.006231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:33.064907  585602 cri.go:89] found id: ""
	I1205 20:32:33.064945  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.064958  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:33.064966  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:33.065031  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:33.101268  585602 cri.go:89] found id: ""
	I1205 20:32:33.101295  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.101304  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:33.101310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:33.101378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:33.141705  585602 cri.go:89] found id: ""
	I1205 20:32:33.141733  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.141743  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:33.141750  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:33.141810  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:33.180983  585602 cri.go:89] found id: ""
	I1205 20:32:33.181011  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.181020  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:33.181026  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:33.181086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:33.220742  585602 cri.go:89] found id: ""
	I1205 20:32:33.220779  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.220791  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:33.220799  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:33.220871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:33.255980  585602 cri.go:89] found id: ""
	I1205 20:32:33.256009  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.256017  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:33.256024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:33.256080  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:33.292978  585602 cri.go:89] found id: ""
	I1205 20:32:33.293005  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.293013  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:33.293023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:33.293034  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:33.347167  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:33.347213  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:33.361367  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:33.361408  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:33.435871  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:33.435915  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:33.435932  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:33.518835  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:33.518880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:36.066359  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:36.080867  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:36.080947  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:36.117647  585602 cri.go:89] found id: ""
	I1205 20:32:36.117678  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.117689  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:36.117697  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:36.117763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:36.154376  585602 cri.go:89] found id: ""
	I1205 20:32:36.154412  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.154428  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:36.154436  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:36.154498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:36.193225  585602 cri.go:89] found id: ""
	I1205 20:32:36.193261  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.193274  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:36.193282  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:36.193347  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:36.230717  585602 cri.go:89] found id: ""
	I1205 20:32:36.230748  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.230758  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:36.230764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:36.230817  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:36.270186  585602 cri.go:89] found id: ""
	I1205 20:32:36.270238  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.270252  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:36.270262  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:36.270340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:36.306378  585602 cri.go:89] found id: ""
	I1205 20:32:36.306425  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.306438  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:36.306447  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:36.306531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:36.342256  585602 cri.go:89] found id: ""
	I1205 20:32:36.342289  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.342300  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:36.342306  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:36.342380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:36.380684  585602 cri.go:89] found id: ""
	I1205 20:32:36.380718  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.380732  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:36.380745  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:36.380768  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:36.436066  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:36.436109  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:36.450255  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:36.450285  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:36.521857  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:36.521883  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:36.521897  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:36.608349  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:36.608395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:34.163870  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:36.164890  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:38.042889  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.543140  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:37.619863  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.120462  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:39.157366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:39.171267  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:39.171357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:39.214459  585602 cri.go:89] found id: ""
	I1205 20:32:39.214490  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.214520  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:39.214528  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:39.214583  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:39.250312  585602 cri.go:89] found id: ""
	I1205 20:32:39.250352  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.250366  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:39.250375  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:39.250437  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:39.286891  585602 cri.go:89] found id: ""
	I1205 20:32:39.286932  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.286944  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:39.286952  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:39.287019  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:39.323923  585602 cri.go:89] found id: ""
	I1205 20:32:39.323958  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.323970  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:39.323979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:39.324053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:39.360280  585602 cri.go:89] found id: ""
	I1205 20:32:39.360322  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.360331  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:39.360337  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:39.360403  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:39.397599  585602 cri.go:89] found id: ""
	I1205 20:32:39.397637  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.397650  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:39.397659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:39.397731  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:39.435132  585602 cri.go:89] found id: ""
	I1205 20:32:39.435159  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.435168  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:39.435174  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:39.435241  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:39.470653  585602 cri.go:89] found id: ""
	I1205 20:32:39.470682  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.470690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:39.470700  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:39.470714  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:39.511382  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:39.511413  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:39.563955  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:39.563994  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:39.578015  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:39.578044  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:39.658505  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:39.658535  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:39.658550  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:38.665320  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:41.165054  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.545231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.042231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.620687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.120915  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.248607  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:42.263605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:42.263688  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:42.305480  585602 cri.go:89] found id: ""
	I1205 20:32:42.305508  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.305519  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:42.305527  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:42.305595  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:42.339969  585602 cri.go:89] found id: ""
	I1205 20:32:42.340001  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.340010  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:42.340016  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:42.340090  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:42.381594  585602 cri.go:89] found id: ""
	I1205 20:32:42.381630  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.381643  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:42.381651  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:42.381771  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:42.435039  585602 cri.go:89] found id: ""
	I1205 20:32:42.435072  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.435085  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:42.435093  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:42.435162  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:42.470567  585602 cri.go:89] found id: ""
	I1205 20:32:42.470595  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.470604  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:42.470610  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:42.470674  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:42.510695  585602 cri.go:89] found id: ""
	I1205 20:32:42.510723  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.510731  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:42.510738  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:42.510793  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:42.547687  585602 cri.go:89] found id: ""
	I1205 20:32:42.547711  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.547718  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:42.547735  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:42.547784  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:42.587160  585602 cri.go:89] found id: ""
	I1205 20:32:42.587191  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.587199  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:42.587211  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:42.587225  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:42.669543  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:42.669587  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:42.717795  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:42.717833  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:42.772644  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:42.772696  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:42.788443  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:42.788480  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:42.861560  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.362758  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:45.377178  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:45.377266  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:45.413055  585602 cri.go:89] found id: ""
	I1205 20:32:45.413088  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.413102  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:45.413111  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:45.413176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:45.453769  585602 cri.go:89] found id: ""
	I1205 20:32:45.453799  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.453808  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:45.453813  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:45.453879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:45.499481  585602 cri.go:89] found id: ""
	I1205 20:32:45.499511  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.499522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:45.499531  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:45.499598  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:45.537603  585602 cri.go:89] found id: ""
	I1205 20:32:45.537638  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.537647  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:45.537653  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:45.537707  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:45.572430  585602 cri.go:89] found id: ""
	I1205 20:32:45.572463  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.572471  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:45.572479  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:45.572556  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:45.610349  585602 cri.go:89] found id: ""
	I1205 20:32:45.610387  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.610398  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:45.610406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:45.610476  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:45.649983  585602 cri.go:89] found id: ""
	I1205 20:32:45.650018  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.650031  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:45.650038  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:45.650113  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:45.689068  585602 cri.go:89] found id: ""
	I1205 20:32:45.689099  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.689107  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:45.689118  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:45.689131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:45.743715  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:45.743758  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:45.759803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:45.759834  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:45.835107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.835133  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:45.835146  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:45.914590  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:45.914632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:43.665616  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:46.164064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.045269  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.544519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.619099  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.627948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:48.456633  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:48.475011  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:48.475086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:48.512878  585602 cri.go:89] found id: ""
	I1205 20:32:48.512913  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.512925  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:48.512933  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:48.513002  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:48.551708  585602 cri.go:89] found id: ""
	I1205 20:32:48.551737  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.551744  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:48.551751  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:48.551805  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:48.590765  585602 cri.go:89] found id: ""
	I1205 20:32:48.590791  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.590800  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:48.590806  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:48.590859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:48.629447  585602 cri.go:89] found id: ""
	I1205 20:32:48.629473  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.629481  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:48.629487  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:48.629540  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:48.667299  585602 cri.go:89] found id: ""
	I1205 20:32:48.667329  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.667339  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:48.667347  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:48.667414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:48.703771  585602 cri.go:89] found id: ""
	I1205 20:32:48.703816  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.703830  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:48.703841  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:48.703911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:48.747064  585602 cri.go:89] found id: ""
	I1205 20:32:48.747098  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.747111  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:48.747118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:48.747186  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.786608  585602 cri.go:89] found id: ""
	I1205 20:32:48.786649  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.786663  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:48.786684  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:48.786700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:48.860834  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:48.860866  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:48.860881  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:48.944029  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:48.944082  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:48.982249  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:48.982284  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:49.036460  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:49.036509  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.556456  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:51.571498  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:51.571590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:51.616890  585602 cri.go:89] found id: ""
	I1205 20:32:51.616924  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.616934  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:51.616942  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:51.617008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:51.660397  585602 cri.go:89] found id: ""
	I1205 20:32:51.660433  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.660445  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:51.660453  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:51.660543  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:51.698943  585602 cri.go:89] found id: ""
	I1205 20:32:51.698973  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.698981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:51.698988  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:51.699041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:51.737254  585602 cri.go:89] found id: ""
	I1205 20:32:51.737288  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.737297  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:51.737310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:51.737366  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:51.775560  585602 cri.go:89] found id: ""
	I1205 20:32:51.775592  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.775600  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:51.775606  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:51.775681  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:51.814314  585602 cri.go:89] found id: ""
	I1205 20:32:51.814370  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.814383  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:51.814393  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:51.814464  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:51.849873  585602 cri.go:89] found id: ""
	I1205 20:32:51.849913  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.849935  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:51.849944  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:51.850018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.164562  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:50.664498  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.044224  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.542721  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.118857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.120231  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:51.891360  585602 cri.go:89] found id: ""
	I1205 20:32:51.891388  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.891400  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:51.891412  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:51.891429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:51.943812  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:51.943854  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.959119  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:51.959152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:52.036014  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:52.036040  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:52.036059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:52.114080  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:52.114122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:54.657243  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:54.672319  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:54.672407  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:54.708446  585602 cri.go:89] found id: ""
	I1205 20:32:54.708475  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.708484  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:54.708491  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:54.708569  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:54.747309  585602 cri.go:89] found id: ""
	I1205 20:32:54.747347  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.747359  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:54.747370  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:54.747451  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:54.790742  585602 cri.go:89] found id: ""
	I1205 20:32:54.790772  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.790781  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:54.790787  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:54.790853  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:54.828857  585602 cri.go:89] found id: ""
	I1205 20:32:54.828885  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.828894  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:54.828902  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:54.828964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:54.867691  585602 cri.go:89] found id: ""
	I1205 20:32:54.867729  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.867740  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:54.867747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:54.867819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:54.907216  585602 cri.go:89] found id: ""
	I1205 20:32:54.907242  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.907249  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:54.907256  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:54.907308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:54.945800  585602 cri.go:89] found id: ""
	I1205 20:32:54.945827  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.945837  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:54.945844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:54.945895  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:54.993176  585602 cri.go:89] found id: ""
	I1205 20:32:54.993216  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.993228  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:54.993242  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:54.993258  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:55.045797  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:55.045835  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:55.060103  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:55.060136  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:55.129440  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:55.129467  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:55.129485  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:55.214949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:55.214999  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:53.164619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:55.663605  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.543148  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.543374  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.543687  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.620220  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.620759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.626643  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:57.755086  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:57.769533  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:57.769622  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:57.807812  585602 cri.go:89] found id: ""
	I1205 20:32:57.807847  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.807858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:57.807869  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:57.807941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:57.846179  585602 cri.go:89] found id: ""
	I1205 20:32:57.846209  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.846223  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:57.846232  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:57.846305  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:57.881438  585602 cri.go:89] found id: ""
	I1205 20:32:57.881473  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.881482  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:57.881496  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:57.881553  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:57.918242  585602 cri.go:89] found id: ""
	I1205 20:32:57.918283  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.918294  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:57.918302  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:57.918378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:57.962825  585602 cri.go:89] found id: ""
	I1205 20:32:57.962863  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.962873  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:57.962879  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:57.962955  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:58.004655  585602 cri.go:89] found id: ""
	I1205 20:32:58.004699  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.004711  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:58.004731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:58.004802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:58.043701  585602 cri.go:89] found id: ""
	I1205 20:32:58.043730  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.043738  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:58.043744  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:58.043802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:58.081400  585602 cri.go:89] found id: ""
	I1205 20:32:58.081437  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.081450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:58.081463  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:58.081486  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:58.135531  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:58.135573  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:58.149962  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:58.149998  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:58.227810  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:58.227834  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:58.227849  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:58.308173  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:58.308219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:00.848019  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:00.863423  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:00.863496  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:00.902526  585602 cri.go:89] found id: ""
	I1205 20:33:00.902553  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.902561  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:00.902567  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:00.902621  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:00.939891  585602 cri.go:89] found id: ""
	I1205 20:33:00.939932  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.939942  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:00.939948  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:00.940022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:00.981645  585602 cri.go:89] found id: ""
	I1205 20:33:00.981676  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.981684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:00.981691  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:00.981745  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:01.027753  585602 cri.go:89] found id: ""
	I1205 20:33:01.027780  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.027789  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:01.027795  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:01.027877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:01.064529  585602 cri.go:89] found id: ""
	I1205 20:33:01.064559  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.064567  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:01.064574  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:01.064628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:01.102239  585602 cri.go:89] found id: ""
	I1205 20:33:01.102272  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.102281  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:01.102287  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:01.102357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:01.139723  585602 cri.go:89] found id: ""
	I1205 20:33:01.139760  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.139770  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:01.139778  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:01.139845  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:01.176172  585602 cri.go:89] found id: ""
	I1205 20:33:01.176198  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.176207  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:01.176216  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:01.176231  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:01.230085  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:01.230133  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:01.245574  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:01.245617  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:01.340483  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:01.340520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:01.340537  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:01.416925  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:01.416972  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:58.164852  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.664376  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:02.677134  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.042415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.543101  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.119783  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.120647  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.958855  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:03.974024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:03.974096  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:04.021407  585602 cri.go:89] found id: ""
	I1205 20:33:04.021442  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.021451  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:04.021458  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:04.021523  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:04.063385  585602 cri.go:89] found id: ""
	I1205 20:33:04.063414  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.063423  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:04.063430  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:04.063488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:04.103693  585602 cri.go:89] found id: ""
	I1205 20:33:04.103735  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.103747  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:04.103756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:04.103815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:04.143041  585602 cri.go:89] found id: ""
	I1205 20:33:04.143072  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.143100  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:04.143109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:04.143179  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:04.180668  585602 cri.go:89] found id: ""
	I1205 20:33:04.180702  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.180712  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:04.180718  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:04.180778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:04.221848  585602 cri.go:89] found id: ""
	I1205 20:33:04.221885  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.221894  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:04.221901  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:04.222018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:04.263976  585602 cri.go:89] found id: ""
	I1205 20:33:04.264014  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.264024  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:04.264030  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:04.264097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:04.298698  585602 cri.go:89] found id: ""
	I1205 20:33:04.298726  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.298737  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:04.298751  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:04.298767  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:04.347604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:04.347659  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:04.361325  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:04.361361  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:04.437679  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:04.437704  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:04.437720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:04.520043  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:04.520103  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:05.163317  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.165936  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:08.043365  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:10.544442  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.122134  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:09.620228  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.070687  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:07.085290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:07.085367  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:07.126233  585602 cri.go:89] found id: ""
	I1205 20:33:07.126265  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.126276  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:07.126285  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:07.126346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:07.163004  585602 cri.go:89] found id: ""
	I1205 20:33:07.163040  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.163053  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:07.163061  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:07.163126  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:07.201372  585602 cri.go:89] found id: ""
	I1205 20:33:07.201412  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.201425  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:07.201435  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:07.201509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:07.237762  585602 cri.go:89] found id: ""
	I1205 20:33:07.237795  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.237807  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:07.237815  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:07.237885  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:07.273940  585602 cri.go:89] found id: ""
	I1205 20:33:07.273976  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.273985  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:07.273995  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:07.274057  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:07.311028  585602 cri.go:89] found id: ""
	I1205 20:33:07.311061  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.311070  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:07.311076  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:07.311131  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:07.347386  585602 cri.go:89] found id: ""
	I1205 20:33:07.347422  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.347433  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:07.347441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:07.347503  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:07.386412  585602 cri.go:89] found id: ""
	I1205 20:33:07.386446  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.386458  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:07.386471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:07.386489  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:07.430250  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:07.430280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:07.483936  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:07.483982  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:07.498201  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:07.498236  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:07.576741  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:07.576767  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:07.576780  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.164792  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:10.178516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:10.178596  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:10.215658  585602 cri.go:89] found id: ""
	I1205 20:33:10.215692  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.215702  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:10.215711  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:10.215779  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:10.251632  585602 cri.go:89] found id: ""
	I1205 20:33:10.251671  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.251683  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:10.251691  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:10.251763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:10.295403  585602 cri.go:89] found id: ""
	I1205 20:33:10.295435  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.295453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:10.295460  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:10.295513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:10.329747  585602 cri.go:89] found id: ""
	I1205 20:33:10.329778  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.329787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:10.329793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:10.329871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:10.369975  585602 cri.go:89] found id: ""
	I1205 20:33:10.370016  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.370028  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:10.370036  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:10.370104  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:10.408146  585602 cri.go:89] found id: ""
	I1205 20:33:10.408183  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.408196  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:10.408204  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:10.408288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:10.443803  585602 cri.go:89] found id: ""
	I1205 20:33:10.443839  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.443850  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:10.443858  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:10.443932  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:10.481784  585602 cri.go:89] found id: ""
	I1205 20:33:10.481826  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.481840  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:10.481854  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:10.481872  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:10.531449  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:10.531498  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:10.549258  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:10.549288  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:10.620162  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:10.620189  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:10.620206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.704656  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:10.704706  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:09.663940  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.163534  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:13.043720  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:15.542736  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.118781  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:14.619996  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:13.251518  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:13.264731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:13.264815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:13.297816  585602 cri.go:89] found id: ""
	I1205 20:33:13.297846  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.297855  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:13.297861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:13.297918  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:13.330696  585602 cri.go:89] found id: ""
	I1205 20:33:13.330724  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.330732  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:13.330738  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:13.330789  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:13.366257  585602 cri.go:89] found id: ""
	I1205 20:33:13.366304  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.366315  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:13.366321  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:13.366385  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:13.403994  585602 cri.go:89] found id: ""
	I1205 20:33:13.404030  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.404042  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:13.404051  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:13.404121  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:13.450160  585602 cri.go:89] found id: ""
	I1205 20:33:13.450189  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.450198  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:13.450205  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:13.450262  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:13.502593  585602 cri.go:89] found id: ""
	I1205 20:33:13.502629  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.502640  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:13.502650  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:13.502720  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:13.548051  585602 cri.go:89] found id: ""
	I1205 20:33:13.548084  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.548095  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:13.548103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:13.548166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:13.593913  585602 cri.go:89] found id: ""
	I1205 20:33:13.593947  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.593960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:13.593975  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:13.593997  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:13.674597  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:13.674628  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:13.674647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:13.760747  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:13.760796  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:13.804351  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:13.804383  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:13.856896  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:13.856958  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.372754  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:16.387165  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:16.387242  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:16.426612  585602 cri.go:89] found id: ""
	I1205 20:33:16.426655  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.426668  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:16.426676  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:16.426734  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:16.461936  585602 cri.go:89] found id: ""
	I1205 20:33:16.461974  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.461988  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:16.461997  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:16.462060  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:16.498010  585602 cri.go:89] found id: ""
	I1205 20:33:16.498044  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.498062  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:16.498069  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:16.498133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:16.533825  585602 cri.go:89] found id: ""
	I1205 20:33:16.533854  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.533863  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:16.533869  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:16.533941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:16.570834  585602 cri.go:89] found id: ""
	I1205 20:33:16.570875  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.570887  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:16.570896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:16.570968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:16.605988  585602 cri.go:89] found id: ""
	I1205 20:33:16.606026  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.606038  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:16.606047  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:16.606140  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:16.645148  585602 cri.go:89] found id: ""
	I1205 20:33:16.645178  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.645188  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:16.645195  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:16.645261  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:16.682449  585602 cri.go:89] found id: ""
	I1205 20:33:16.682479  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.682491  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:16.682502  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:16.682519  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.696944  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:16.696980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:16.777034  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:16.777064  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:16.777078  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:14.164550  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.664527  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:17.543278  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:19.543404  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.621517  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:18.626303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.854812  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:16.854880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:16.905101  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:16.905131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.463427  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:19.477135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:19.477233  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:19.529213  585602 cri.go:89] found id: ""
	I1205 20:33:19.529248  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.529264  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:19.529274  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:19.529359  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:19.575419  585602 cri.go:89] found id: ""
	I1205 20:33:19.575453  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.575465  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:19.575474  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:19.575546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:19.616657  585602 cri.go:89] found id: ""
	I1205 20:33:19.616691  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.616704  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:19.616713  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:19.616787  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:19.653142  585602 cri.go:89] found id: ""
	I1205 20:33:19.653177  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.653189  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:19.653198  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:19.653267  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:19.690504  585602 cri.go:89] found id: ""
	I1205 20:33:19.690544  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.690555  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:19.690563  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:19.690635  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:19.730202  585602 cri.go:89] found id: ""
	I1205 20:33:19.730229  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.730237  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:19.730245  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:19.730302  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:19.767212  585602 cri.go:89] found id: ""
	I1205 20:33:19.767243  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.767255  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:19.767264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:19.767336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:19.803089  585602 cri.go:89] found id: ""
	I1205 20:33:19.803125  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.803137  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:19.803163  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:19.803180  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:19.884542  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:19.884589  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:19.925257  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:19.925303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.980457  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:19.980510  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:19.997026  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:19.997057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:20.075062  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:18.664915  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.163064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:22.042272  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:24.043822  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.120054  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:23.120944  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.618857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:22.575469  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:22.588686  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:22.588768  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:22.622824  585602 cri.go:89] found id: ""
	I1205 20:33:22.622860  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.622868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:22.622874  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:22.622931  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:22.659964  585602 cri.go:89] found id: ""
	I1205 20:33:22.660059  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.660074  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:22.660085  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:22.660153  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:22.695289  585602 cri.go:89] found id: ""
	I1205 20:33:22.695325  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.695337  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:22.695345  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:22.695417  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:22.734766  585602 cri.go:89] found id: ""
	I1205 20:33:22.734801  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.734813  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:22.734821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:22.734896  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:22.773778  585602 cri.go:89] found id: ""
	I1205 20:33:22.773806  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.773818  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:22.773826  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:22.773899  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:22.811468  585602 cri.go:89] found id: ""
	I1205 20:33:22.811503  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.811514  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:22.811521  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:22.811591  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:22.852153  585602 cri.go:89] found id: ""
	I1205 20:33:22.852210  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.852221  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:22.852227  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:22.852318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:22.888091  585602 cri.go:89] found id: ""
	I1205 20:33:22.888120  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.888129  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:22.888139  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:22.888155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:22.943210  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:22.943252  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:22.958356  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:22.958393  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:23.026732  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:23.026770  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:23.026788  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:23.106356  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:23.106395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:25.650832  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:25.665392  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:25.665475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:25.701109  585602 cri.go:89] found id: ""
	I1205 20:33:25.701146  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.701155  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:25.701162  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:25.701231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:25.738075  585602 cri.go:89] found id: ""
	I1205 20:33:25.738108  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.738117  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:25.738123  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:25.738176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:25.775031  585602 cri.go:89] found id: ""
	I1205 20:33:25.775078  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.775090  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:25.775100  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:25.775173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:25.811343  585602 cri.go:89] found id: ""
	I1205 20:33:25.811376  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.811386  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:25.811395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:25.811471  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:25.846635  585602 cri.go:89] found id: ""
	I1205 20:33:25.846674  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.846684  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:25.846692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:25.846766  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:25.881103  585602 cri.go:89] found id: ""
	I1205 20:33:25.881136  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.881145  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:25.881151  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:25.881224  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:25.917809  585602 cri.go:89] found id: ""
	I1205 20:33:25.917844  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.917855  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:25.917864  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:25.917936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:25.955219  585602 cri.go:89] found id: ""
	I1205 20:33:25.955245  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.955254  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:25.955264  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:25.955276  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:26.007016  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:26.007059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:26.021554  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:26.021601  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:26.099290  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:26.099321  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:26.099334  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:26.182955  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:26.182993  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:23.164876  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.665151  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:26.542519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:28.542856  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.542941  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:27.621687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.119140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:28.725201  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:28.739515  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:28.739602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.778187  585602 cri.go:89] found id: ""
	I1205 20:33:28.778230  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.778242  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:28.778249  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:28.778315  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:28.815788  585602 cri.go:89] found id: ""
	I1205 20:33:28.815826  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.815838  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:28.815845  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:28.815912  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:28.852222  585602 cri.go:89] found id: ""
	I1205 20:33:28.852251  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.852261  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:28.852289  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:28.852362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:28.889742  585602 cri.go:89] found id: ""
	I1205 20:33:28.889776  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.889787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:28.889794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:28.889859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:28.926872  585602 cri.go:89] found id: ""
	I1205 20:33:28.926903  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.926912  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:28.926919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:28.926972  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:28.963380  585602 cri.go:89] found id: ""
	I1205 20:33:28.963418  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.963432  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:28.963441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:28.963509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:29.000711  585602 cri.go:89] found id: ""
	I1205 20:33:29.000746  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.000764  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:29.000772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:29.000848  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:29.035934  585602 cri.go:89] found id: ""
	I1205 20:33:29.035963  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.035974  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:29.035987  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:29.036003  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:29.091336  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:29.091382  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:29.105784  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:29.105814  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:29.182038  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:29.182078  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:29.182095  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:29.261107  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:29.261153  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:31.802911  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:31.817285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:31.817369  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.164470  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.664154  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:33.043654  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.044730  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:32.120759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:34.619618  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:31.854865  585602 cri.go:89] found id: ""
	I1205 20:33:31.854900  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.854914  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:31.854922  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:31.854995  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:31.893928  585602 cri.go:89] found id: ""
	I1205 20:33:31.893964  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.893977  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:31.893984  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:31.894053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:31.929490  585602 cri.go:89] found id: ""
	I1205 20:33:31.929527  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.929540  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:31.929548  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:31.929637  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:31.964185  585602 cri.go:89] found id: ""
	I1205 20:33:31.964211  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.964219  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:31.964225  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:31.964291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:32.002708  585602 cri.go:89] found id: ""
	I1205 20:33:32.002748  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.002760  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:32.002768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:32.002847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:32.040619  585602 cri.go:89] found id: ""
	I1205 20:33:32.040712  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.040740  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:32.040758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:32.040839  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:32.079352  585602 cri.go:89] found id: ""
	I1205 20:33:32.079390  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.079404  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:32.079412  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:32.079484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:32.117560  585602 cri.go:89] found id: ""
	I1205 20:33:32.117596  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.117608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:32.117629  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:32.117653  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:32.172639  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:32.172686  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:32.187687  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:32.187727  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:32.265000  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:32.265034  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:32.265051  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:32.348128  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:32.348176  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
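	(The cycle above repeats for the rest of this failure: minikube's log collector finds no running control-plane containers via crictl, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output, and the describe-nodes step fails because nothing is listening on localhost:8443. A minimal sketch of the same diagnostic sequence, run by hand from a shell on the node, e.g. after "minikube ssh"; the commands are copied verbatim from the log above, and the v1.20.0 kubectl path applies only to this old-k8s-version profile:

	    #!/bin/bash
	    # Check whether any control-plane containers exist at all (same crictl query minikube runs).
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done
	    # Fallback log sources minikube collects when the list above comes back empty.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

	The "connection to the server localhost:8443 was refused" lines that follow each describe-nodes attempt are consistent with the empty crictl listings: no kube-apiserver container ever comes up, so the kubeconfig endpoint has nothing to talk to.)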
	I1205 20:33:34.890144  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:34.903953  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:34.904032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:34.939343  585602 cri.go:89] found id: ""
	I1205 20:33:34.939374  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.939383  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:34.939389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:34.939444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:34.978225  585602 cri.go:89] found id: ""
	I1205 20:33:34.978266  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.978278  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:34.978286  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:34.978363  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:35.015918  585602 cri.go:89] found id: ""
	I1205 20:33:35.015950  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.015960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:35.015966  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:35.016032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:35.053222  585602 cri.go:89] found id: ""
	I1205 20:33:35.053249  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.053257  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:35.053264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:35.053320  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:35.088369  585602 cri.go:89] found id: ""
	I1205 20:33:35.088401  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.088412  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:35.088421  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:35.088498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:35.135290  585602 cri.go:89] found id: ""
	I1205 20:33:35.135327  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.135338  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:35.135346  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:35.135412  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:35.174959  585602 cri.go:89] found id: ""
	I1205 20:33:35.174996  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.175008  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:35.175017  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:35.175097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:35.215101  585602 cri.go:89] found id: ""
	I1205 20:33:35.215134  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.215143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:35.215152  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:35.215167  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:35.269372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:35.269414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:35.285745  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:35.285776  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:35.364774  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:35.364807  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:35.364824  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:35.445932  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:35.445980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:33.163790  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.163966  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.164819  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.047128  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.543051  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:36.620450  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.120055  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.996837  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:38.010545  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:38.010612  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:38.048292  585602 cri.go:89] found id: ""
	I1205 20:33:38.048334  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.048350  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:38.048360  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:38.048429  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:38.086877  585602 cri.go:89] found id: ""
	I1205 20:33:38.086911  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.086921  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:38.086927  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:38.087001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:38.122968  585602 cri.go:89] found id: ""
	I1205 20:33:38.122999  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.123010  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:38.123018  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:38.123082  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:38.164901  585602 cri.go:89] found id: ""
	I1205 20:33:38.164940  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.164949  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:38.164955  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:38.165006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:38.200697  585602 cri.go:89] found id: ""
	I1205 20:33:38.200725  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.200734  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:38.200740  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:38.200803  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:38.240306  585602 cri.go:89] found id: ""
	I1205 20:33:38.240338  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.240347  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:38.240354  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:38.240424  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:38.275788  585602 cri.go:89] found id: ""
	I1205 20:33:38.275823  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.275835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:38.275844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:38.275917  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:38.311431  585602 cri.go:89] found id: ""
	I1205 20:33:38.311468  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.311480  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:38.311493  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:38.311507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:38.361472  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:38.361515  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:38.375970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:38.376004  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:38.450913  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:38.450941  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:38.450961  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:38.527620  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:38.527666  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:41.072438  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:41.086085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:41.086168  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:41.123822  585602 cri.go:89] found id: ""
	I1205 20:33:41.123852  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.123861  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:41.123868  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:41.123919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:41.160343  585602 cri.go:89] found id: ""
	I1205 20:33:41.160371  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.160380  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:41.160389  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:41.160457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:41.198212  585602 cri.go:89] found id: ""
	I1205 20:33:41.198240  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.198249  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:41.198255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:41.198309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:41.233793  585602 cri.go:89] found id: ""
	I1205 20:33:41.233824  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.233832  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:41.233838  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:41.233890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:41.269397  585602 cri.go:89] found id: ""
	I1205 20:33:41.269435  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.269447  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:41.269457  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:41.269529  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:41.303079  585602 cri.go:89] found id: ""
	I1205 20:33:41.303116  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.303128  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:41.303136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:41.303196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:41.337784  585602 cri.go:89] found id: ""
	I1205 20:33:41.337817  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.337826  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:41.337832  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:41.337901  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:41.371410  585602 cri.go:89] found id: ""
	I1205 20:33:41.371438  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.371446  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:41.371456  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:41.371467  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:41.422768  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:41.422807  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:41.437427  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:41.437461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:41.510875  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:41.510898  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:41.510915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:41.590783  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:41.590826  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:39.667344  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.172287  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.043022  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.543222  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:41.120670  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:43.622132  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:45.623483  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.136390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:44.149935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:44.150006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:44.187807  585602 cri.go:89] found id: ""
	I1205 20:33:44.187846  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.187858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:44.187866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:44.187933  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:44.224937  585602 cri.go:89] found id: ""
	I1205 20:33:44.224965  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.224973  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:44.224978  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:44.225040  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:44.260230  585602 cri.go:89] found id: ""
	I1205 20:33:44.260274  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.260287  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:44.260297  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:44.260439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:44.296410  585602 cri.go:89] found id: ""
	I1205 20:33:44.296439  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.296449  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:44.296455  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:44.296507  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:44.332574  585602 cri.go:89] found id: ""
	I1205 20:33:44.332623  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.332635  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:44.332642  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:44.332709  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:44.368925  585602 cri.go:89] found id: ""
	I1205 20:33:44.368973  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.368985  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:44.368994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:44.369068  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:44.410041  585602 cri.go:89] found id: ""
	I1205 20:33:44.410075  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.410088  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:44.410095  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:44.410165  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:44.454254  585602 cri.go:89] found id: ""
	I1205 20:33:44.454295  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.454316  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:44.454330  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:44.454346  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:44.507604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:44.507669  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:44.525172  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:44.525219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:44.599417  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:44.599446  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:44.599465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:44.681624  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:44.681685  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:44.664942  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.163452  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.043225  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:49.044675  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:48.120302  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:50.120568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
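	(Interleaved with the stuck profile above, three other test profiles (processes 585025, 585113, 585929) are polling their metrics-server pods, which never report Ready. A hedged way to spot-check the same condition by hand; the pod name is taken from the log lines above, while the context placeholder stands for whichever profile is under test:

	    # Prints "True" once the pod's Ready condition is met; empty or "False" otherwise.
	    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-rq8xm \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	)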
	I1205 20:33:47.230092  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:47.243979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:47.244076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:47.280346  585602 cri.go:89] found id: ""
	I1205 20:33:47.280376  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.280385  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:47.280392  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:47.280448  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:47.316454  585602 cri.go:89] found id: ""
	I1205 20:33:47.316479  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.316487  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:47.316493  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:47.316546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:47.353339  585602 cri.go:89] found id: ""
	I1205 20:33:47.353374  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.353386  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:47.353395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:47.353466  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:47.388256  585602 cri.go:89] found id: ""
	I1205 20:33:47.388319  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.388330  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:47.388339  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:47.388408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:47.424907  585602 cri.go:89] found id: ""
	I1205 20:33:47.424942  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.424953  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:47.424961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:47.425035  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:47.461386  585602 cri.go:89] found id: ""
	I1205 20:33:47.461416  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.461425  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:47.461431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:47.461485  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:47.501092  585602 cri.go:89] found id: ""
	I1205 20:33:47.501121  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.501130  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:47.501136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:47.501189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:47.559478  585602 cri.go:89] found id: ""
	I1205 20:33:47.559507  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.559520  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:47.559533  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:47.559551  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:47.609761  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:47.609800  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:47.626579  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:47.626606  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:47.713490  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:47.713520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:47.713540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:47.795346  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:47.795398  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.339441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:50.353134  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:50.353216  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:50.393950  585602 cri.go:89] found id: ""
	I1205 20:33:50.393979  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.393990  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:50.394007  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:50.394074  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:50.431166  585602 cri.go:89] found id: ""
	I1205 20:33:50.431201  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.431212  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:50.431221  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:50.431291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:50.472641  585602 cri.go:89] found id: ""
	I1205 20:33:50.472674  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.472684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:50.472692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:50.472763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:50.512111  585602 cri.go:89] found id: ""
	I1205 20:33:50.512152  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.512165  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:50.512173  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:50.512247  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:50.554500  585602 cri.go:89] found id: ""
	I1205 20:33:50.554536  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.554549  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:50.554558  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:50.554625  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:50.590724  585602 cri.go:89] found id: ""
	I1205 20:33:50.590755  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.590764  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:50.590771  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:50.590837  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:50.628640  585602 cri.go:89] found id: ""
	I1205 20:33:50.628666  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.628675  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:50.628681  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:50.628732  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:50.670009  585602 cri.go:89] found id: ""
	I1205 20:33:50.670039  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.670047  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:50.670063  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:50.670075  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:50.684236  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:50.684290  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:50.757761  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:50.757790  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:50.757813  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:50.839665  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:50.839720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.881087  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:50.881122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:49.164986  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.665655  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.543286  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.543689  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:52.621297  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:54.621764  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.433345  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:53.446747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:53.446819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:53.482928  585602 cri.go:89] found id: ""
	I1205 20:33:53.482967  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.482979  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:53.482988  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:53.483048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:53.519096  585602 cri.go:89] found id: ""
	I1205 20:33:53.519128  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.519136  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:53.519142  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:53.519196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:53.556207  585602 cri.go:89] found id: ""
	I1205 20:33:53.556233  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.556243  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:53.556249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:53.556346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:53.589708  585602 cri.go:89] found id: ""
	I1205 20:33:53.589736  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.589745  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:53.589758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:53.589813  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:53.630344  585602 cri.go:89] found id: ""
	I1205 20:33:53.630371  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.630380  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:53.630386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:53.630438  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:53.668895  585602 cri.go:89] found id: ""
	I1205 20:33:53.668921  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.668929  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:53.668935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:53.668987  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:53.706601  585602 cri.go:89] found id: ""
	I1205 20:33:53.706628  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.706638  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:53.706644  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:53.706704  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:53.744922  585602 cri.go:89] found id: ""
	I1205 20:33:53.744952  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.744960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:53.744970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:53.744989  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:53.823816  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:53.823853  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:53.823928  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:53.905075  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:53.905118  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:53.955424  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:53.955468  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:54.014871  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:54.014916  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.537142  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:56.550409  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:56.550478  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:56.587148  585602 cri.go:89] found id: ""
	I1205 20:33:56.587174  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.587184  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:56.587190  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:56.587249  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:56.625153  585602 cri.go:89] found id: ""
	I1205 20:33:56.625180  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.625188  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:56.625193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:56.625243  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:56.671545  585602 cri.go:89] found id: ""
	I1205 20:33:56.671573  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.671582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:56.671589  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:56.671652  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:56.712760  585602 cri.go:89] found id: ""
	I1205 20:33:56.712797  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.712810  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:56.712818  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:56.712890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:56.751219  585602 cri.go:89] found id: ""
	I1205 20:33:56.751254  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.751266  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:56.751274  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:56.751340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:56.787946  585602 cri.go:89] found id: ""
	I1205 20:33:56.787985  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.787998  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:56.788007  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:56.788101  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:56.823057  585602 cri.go:89] found id: ""
	I1205 20:33:56.823095  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.823108  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:56.823114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:56.823170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:54.164074  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.165063  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.043193  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:58.044158  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.542798  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.624407  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:59.119743  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.860358  585602 cri.go:89] found id: ""
	I1205 20:33:56.860396  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.860408  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:56.860421  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:56.860438  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:56.912954  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:56.912996  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.927642  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:56.927691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:57.007316  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:57.007344  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:57.007359  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:57.091471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:57.091522  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:59.642150  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:59.656240  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:59.656324  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:59.695918  585602 cri.go:89] found id: ""
	I1205 20:33:59.695954  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.695965  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:59.695973  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:59.696037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:59.744218  585602 cri.go:89] found id: ""
	I1205 20:33:59.744250  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.744260  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:59.744278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:59.744340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:59.799035  585602 cri.go:89] found id: ""
	I1205 20:33:59.799081  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.799094  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:59.799102  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:59.799172  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:59.850464  585602 cri.go:89] found id: ""
	I1205 20:33:59.850505  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.850517  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:59.850526  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:59.850590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:59.886441  585602 cri.go:89] found id: ""
	I1205 20:33:59.886477  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.886489  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:59.886497  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:59.886564  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:59.926689  585602 cri.go:89] found id: ""
	I1205 20:33:59.926728  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.926741  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:59.926751  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:59.926821  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:59.962615  585602 cri.go:89] found id: ""
	I1205 20:33:59.962644  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.962653  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:59.962659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:59.962716  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:00.001852  585602 cri.go:89] found id: ""
	I1205 20:34:00.001878  585602 logs.go:282] 0 containers: []
	W1205 20:34:00.001886  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:00.001897  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:00.001913  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:00.055465  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:00.055508  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:00.071904  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:00.071941  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:00.151225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:00.151248  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:00.151262  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:00.233869  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:00.233914  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:58.664773  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.664948  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:02.543019  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:04.543810  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:01.120136  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:03.120824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.620283  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:02.776751  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:02.790868  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:02.790945  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:02.834686  585602 cri.go:89] found id: ""
	I1205 20:34:02.834719  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.834731  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:02.834740  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:02.834823  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:02.871280  585602 cri.go:89] found id: ""
	I1205 20:34:02.871313  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.871333  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:02.871342  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:02.871413  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:02.907300  585602 cri.go:89] found id: ""
	I1205 20:34:02.907336  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.907346  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:02.907352  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:02.907406  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:02.945453  585602 cri.go:89] found id: ""
	I1205 20:34:02.945487  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.945499  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:02.945511  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:02.945587  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:02.980528  585602 cri.go:89] found id: ""
	I1205 20:34:02.980561  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.980573  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:02.980580  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:02.980653  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:03.016919  585602 cri.go:89] found id: ""
	I1205 20:34:03.016946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.016955  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:03.016961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:03.017012  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:03.053541  585602 cri.go:89] found id: ""
	I1205 20:34:03.053575  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.053588  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:03.053596  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:03.053655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:03.089907  585602 cri.go:89] found id: ""
	I1205 20:34:03.089946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.089959  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:03.089974  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:03.089991  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:03.144663  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:03.144700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:03.160101  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:03.160140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:03.231559  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:03.231583  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:03.231600  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:03.313226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:03.313271  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:05.855538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:05.869019  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:05.869120  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:05.906879  585602 cri.go:89] found id: ""
	I1205 20:34:05.906910  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.906921  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:05.906928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:05.906994  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:05.946846  585602 cri.go:89] found id: ""
	I1205 20:34:05.946881  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.946893  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:05.946900  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:05.946968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:05.984067  585602 cri.go:89] found id: ""
	I1205 20:34:05.984104  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.984118  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:05.984127  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:05.984193  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:06.024984  585602 cri.go:89] found id: ""
	I1205 20:34:06.025014  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.025023  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:06.025029  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:06.025091  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:06.064766  585602 cri.go:89] found id: ""
	I1205 20:34:06.064794  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.064806  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:06.064821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:06.064877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:06.105652  585602 cri.go:89] found id: ""
	I1205 20:34:06.105683  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.105691  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:06.105698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:06.105748  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:06.143732  585602 cri.go:89] found id: ""
	I1205 20:34:06.143762  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.143773  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:06.143781  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:06.143857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:06.183397  585602 cri.go:89] found id: ""
	I1205 20:34:06.183429  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.183439  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:06.183449  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:06.183462  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:06.236403  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:06.236449  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:06.250728  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:06.250759  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:06.320983  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:06.321009  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:06.321025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:06.408037  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:06.408084  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:03.164354  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.665345  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:07.044218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:09.543580  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.119532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.119918  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.955959  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:08.968956  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:08.969037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:09.002804  585602 cri.go:89] found id: ""
	I1205 20:34:09.002846  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.002859  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:09.002866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:09.002935  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:09.039098  585602 cri.go:89] found id: ""
	I1205 20:34:09.039191  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.039210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:09.039220  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:09.039291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:09.074727  585602 cri.go:89] found id: ""
	I1205 20:34:09.074764  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.074776  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:09.074792  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:09.074861  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:09.112650  585602 cri.go:89] found id: ""
	I1205 20:34:09.112682  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.112692  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:09.112698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:09.112754  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:09.149301  585602 cri.go:89] found id: ""
	I1205 20:34:09.149346  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.149359  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:09.149368  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:09.149432  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:09.190288  585602 cri.go:89] found id: ""
	I1205 20:34:09.190317  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.190329  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:09.190338  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:09.190404  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:09.225311  585602 cri.go:89] found id: ""
	I1205 20:34:09.225348  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.225361  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:09.225369  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:09.225435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:09.261023  585602 cri.go:89] found id: ""
	I1205 20:34:09.261052  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.261063  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:09.261075  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:09.261092  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:09.313733  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:09.313785  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:09.329567  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:09.329619  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:09.403397  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:09.403430  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:09.403447  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:09.486586  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:09.486630  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:08.163730  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.663603  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.665663  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:11.544538  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.042854  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.120629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.621977  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.028110  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:12.041802  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:12.041866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:12.080349  585602 cri.go:89] found id: ""
	I1205 20:34:12.080388  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.080402  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:12.080410  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:12.080475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:12.121455  585602 cri.go:89] found id: ""
	I1205 20:34:12.121486  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.121499  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:12.121507  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:12.121567  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:12.157743  585602 cri.go:89] found id: ""
	I1205 20:34:12.157768  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.157785  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:12.157794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:12.157855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:12.196901  585602 cri.go:89] found id: ""
	I1205 20:34:12.196933  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.196946  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:12.196954  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:12.197024  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:12.234471  585602 cri.go:89] found id: ""
	I1205 20:34:12.234500  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.234508  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:12.234516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:12.234585  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:12.269238  585602 cri.go:89] found id: ""
	I1205 20:34:12.269263  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.269271  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:12.269278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:12.269340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:12.307965  585602 cri.go:89] found id: ""
	I1205 20:34:12.308006  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.308016  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:12.308022  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:12.308081  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:12.343463  585602 cri.go:89] found id: ""
	I1205 20:34:12.343497  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.343510  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:12.343536  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:12.343574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:12.393393  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:12.393437  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:12.407991  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:12.408025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:12.477868  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:12.477910  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:12.477924  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:12.557274  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:12.557315  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.102587  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:15.115734  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:15.115808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:15.153057  585602 cri.go:89] found id: ""
	I1205 20:34:15.153091  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.153105  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:15.153113  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:15.153182  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:15.192762  585602 cri.go:89] found id: ""
	I1205 20:34:15.192815  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.192825  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:15.192831  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:15.192887  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:15.231330  585602 cri.go:89] found id: ""
	I1205 20:34:15.231364  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.231374  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:15.231380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:15.231435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:15.265229  585602 cri.go:89] found id: ""
	I1205 20:34:15.265262  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.265271  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:15.265278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:15.265350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:15.299596  585602 cri.go:89] found id: ""
	I1205 20:34:15.299624  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.299634  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:15.299640  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:15.299699  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:15.336155  585602 cri.go:89] found id: ""
	I1205 20:34:15.336187  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.336195  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:15.336202  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:15.336256  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:15.371867  585602 cri.go:89] found id: ""
	I1205 20:34:15.371899  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.371909  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:15.371920  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:15.371976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:15.408536  585602 cri.go:89] found id: ""
	I1205 20:34:15.408566  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.408580  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:15.408592  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:15.408609  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:15.422499  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:15.422538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:15.495096  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:15.495131  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:15.495145  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:15.571411  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:15.571461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.612284  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:15.612319  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:15.165343  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.165619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:16.043962  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.542495  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.119936  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:19.622046  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.168869  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:18.184247  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:18.184370  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:18.226078  585602 cri.go:89] found id: ""
	I1205 20:34:18.226112  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.226124  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:18.226133  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:18.226202  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:18.266221  585602 cri.go:89] found id: ""
	I1205 20:34:18.266258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.266270  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:18.266278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:18.266349  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:18.305876  585602 cri.go:89] found id: ""
	I1205 20:34:18.305903  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.305912  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:18.305921  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:18.305971  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:18.342044  585602 cri.go:89] found id: ""
	I1205 20:34:18.342077  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.342089  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:18.342098  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:18.342160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:18.380240  585602 cri.go:89] found id: ""
	I1205 20:34:18.380290  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.380301  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:18.380310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:18.380372  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:18.416228  585602 cri.go:89] found id: ""
	I1205 20:34:18.416258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.416301  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:18.416311  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:18.416380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:18.453368  585602 cri.go:89] found id: ""
	I1205 20:34:18.453407  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.453420  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:18.453429  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:18.453513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:18.491689  585602 cri.go:89] found id: ""
	I1205 20:34:18.491727  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.491739  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:18.491754  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:18.491779  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:18.546614  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:18.546652  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:18.560516  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:18.560547  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:18.637544  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:18.637568  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:18.637582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:18.720410  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:18.720453  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:21.261494  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:21.276378  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:21.276473  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:21.317571  585602 cri.go:89] found id: ""
	I1205 20:34:21.317602  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.317610  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:21.317617  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:21.317670  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:21.355174  585602 cri.go:89] found id: ""
	I1205 20:34:21.355202  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.355210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:21.355217  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:21.355277  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:21.393259  585602 cri.go:89] found id: ""
	I1205 20:34:21.393297  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.393310  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:21.393317  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:21.393408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:21.432286  585602 cri.go:89] found id: ""
	I1205 20:34:21.432329  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.432341  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:21.432348  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:21.432415  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:21.469844  585602 cri.go:89] found id: ""
	I1205 20:34:21.469877  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.469888  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:21.469896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:21.469964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:21.508467  585602 cri.go:89] found id: ""
	I1205 20:34:21.508507  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.508519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:21.508528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:21.508592  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:21.553053  585602 cri.go:89] found id: ""
	I1205 20:34:21.553185  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.553208  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:21.553226  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:21.553317  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:21.590595  585602 cri.go:89] found id: ""
	I1205 20:34:21.590629  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.590640  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:21.590654  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:21.590672  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:21.649493  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:21.649546  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:21.666114  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:21.666147  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:21.742801  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:21.742828  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:21.742858  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:21.822949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:21.823010  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:19.165951  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.664450  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.043233  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:23.043477  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:25.543490  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:22.119177  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.119685  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.366575  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:24.380894  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:24.380992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:24.416907  585602 cri.go:89] found id: ""
	I1205 20:34:24.416943  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.416956  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:24.416965  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:24.417034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:24.453303  585602 cri.go:89] found id: ""
	I1205 20:34:24.453337  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.453349  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:24.453358  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:24.453445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:24.496795  585602 cri.go:89] found id: ""
	I1205 20:34:24.496825  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.496833  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:24.496839  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:24.496907  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:24.539105  585602 cri.go:89] found id: ""
	I1205 20:34:24.539142  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.539154  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:24.539162  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:24.539230  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:24.576778  585602 cri.go:89] found id: ""
	I1205 20:34:24.576808  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.576816  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:24.576822  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:24.576879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:24.617240  585602 cri.go:89] found id: ""
	I1205 20:34:24.617271  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.617280  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:24.617293  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:24.617374  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:24.659274  585602 cri.go:89] found id: ""
	I1205 20:34:24.659316  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.659330  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:24.659342  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:24.659408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:24.701047  585602 cri.go:89] found id: ""
	I1205 20:34:24.701092  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.701105  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:24.701121  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:24.701139  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:24.741070  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:24.741115  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:24.793364  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:24.793407  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:24.807803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:24.807839  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:24.883194  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:24.883225  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:24.883243  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:24.163198  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.165402  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:27.544607  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.044244  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.619847  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:28.621467  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.621704  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:27.467460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:27.483055  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:27.483129  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:27.523718  585602 cri.go:89] found id: ""
	I1205 20:34:27.523752  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.523763  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:27.523772  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:27.523841  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:27.562872  585602 cri.go:89] found id: ""
	I1205 20:34:27.562899  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.562908  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:27.562915  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:27.562976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:27.601804  585602 cri.go:89] found id: ""
	I1205 20:34:27.601835  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.601845  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:27.601852  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:27.601916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:27.640553  585602 cri.go:89] found id: ""
	I1205 20:34:27.640589  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.640599  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:27.640605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:27.640672  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:27.680983  585602 cri.go:89] found id: ""
	I1205 20:34:27.681015  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.681027  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:27.681035  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:27.681105  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:27.720766  585602 cri.go:89] found id: ""
	I1205 20:34:27.720811  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.720821  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:27.720828  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:27.720886  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:27.761422  585602 cri.go:89] found id: ""
	I1205 20:34:27.761453  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.761466  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:27.761480  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:27.761550  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:27.799658  585602 cri.go:89] found id: ""
	I1205 20:34:27.799692  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.799705  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:27.799720  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:27.799736  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:27.851801  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:27.851845  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:27.865953  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:27.865984  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:27.941787  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:27.941824  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:27.941840  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:28.023556  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:28.023616  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:30.573267  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:30.586591  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:30.586679  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:30.629923  585602 cri.go:89] found id: ""
	I1205 20:34:30.629960  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.629974  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:30.629982  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:30.630048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:30.667045  585602 cri.go:89] found id: ""
	I1205 20:34:30.667078  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.667090  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:30.667098  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:30.667167  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:30.704479  585602 cri.go:89] found id: ""
	I1205 20:34:30.704510  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.704522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:30.704530  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:30.704620  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:30.746035  585602 cri.go:89] found id: ""
	I1205 20:34:30.746065  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.746077  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:30.746085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:30.746161  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:30.784375  585602 cri.go:89] found id: ""
	I1205 20:34:30.784415  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.784425  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:30.784431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:30.784487  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:30.821779  585602 cri.go:89] found id: ""
	I1205 20:34:30.821811  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.821822  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:30.821831  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:30.821905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:30.856927  585602 cri.go:89] found id: ""
	I1205 20:34:30.856963  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.856976  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:30.856984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:30.857088  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:30.895852  585602 cri.go:89] found id: ""
	I1205 20:34:30.895882  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.895894  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:30.895914  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:30.895930  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:30.947600  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:30.947642  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:30.962717  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:30.962753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:31.049225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:31.049262  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:31.049280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:31.126806  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:31.126850  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:28.665006  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:31.164172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:32.548634  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.042159  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.120370  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.621247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.670844  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:33.685063  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:33.685160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:33.718277  585602 cri.go:89] found id: ""
	I1205 20:34:33.718312  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.718321  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:33.718327  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:33.718378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.755409  585602 cri.go:89] found id: ""
	I1205 20:34:33.755445  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.755456  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:33.755465  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:33.755542  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:33.809447  585602 cri.go:89] found id: ""
	I1205 20:34:33.809506  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.809519  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:33.809527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:33.809599  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:33.848327  585602 cri.go:89] found id: ""
	I1205 20:34:33.848362  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.848376  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:33.848384  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:33.848444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:33.887045  585602 cri.go:89] found id: ""
	I1205 20:34:33.887082  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.887094  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:33.887103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:33.887178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:33.924385  585602 cri.go:89] found id: ""
	I1205 20:34:33.924418  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.924427  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:33.924434  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:33.924499  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:33.960711  585602 cri.go:89] found id: ""
	I1205 20:34:33.960738  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.960747  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:33.960757  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:33.960808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:33.998150  585602 cri.go:89] found id: ""
	I1205 20:34:33.998184  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.998193  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:33.998203  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:33.998215  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:34.041977  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:34.042006  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:34.095895  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:34.095940  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:34.109802  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:34.109836  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:34.185716  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:34.185740  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:34.185753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
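
The block above is one complete diagnostic pass: with the API server unreachable, minikube first probes for a kube-apiserver process, then asks CRI-O for each control-plane container by name, and finally gathers kubelet, dmesg, describe-nodes and CRI-O output. A minimal bash sketch of the same pass, run manually inside the node (for example via minikube ssh); the individual commands are copied from the log, while the loop structure and quoting are illustrative assumptions:

    #!/usr/bin/env bash
    # Sketch of the diagnostic pass repeated in the log above (assumed loop
    # structure; the commands themselves are taken verbatim from the log).

    # Is any kube-apiserver process running for this profile?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

    # Does CRI-O know about any control-plane container?
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

    # The same log sources the harness collects on every pass.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
    # Fails with "connection to the server localhost:8443 was refused" while
    # the apiserver is down:
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
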
	I1205 20:34:36.767768  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:36.782114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:36.782201  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:36.820606  585602 cri.go:89] found id: ""
	I1205 20:34:36.820647  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.820659  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:36.820668  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:36.820736  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.164572  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.664069  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:37.043102  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:39.544667  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:38.120555  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.619948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:36.858999  585602 cri.go:89] found id: ""
	I1205 20:34:36.859033  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.859044  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:36.859051  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:36.859117  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:36.896222  585602 cri.go:89] found id: ""
	I1205 20:34:36.896257  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.896282  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:36.896290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:36.896352  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:36.935565  585602 cri.go:89] found id: ""
	I1205 20:34:36.935602  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.935612  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:36.935618  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:36.935671  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:36.974031  585602 cri.go:89] found id: ""
	I1205 20:34:36.974066  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.974079  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:36.974096  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:36.974166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:37.018243  585602 cri.go:89] found id: ""
	I1205 20:34:37.018278  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.018290  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:37.018300  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:37.018371  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:37.057715  585602 cri.go:89] found id: ""
	I1205 20:34:37.057742  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.057750  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:37.057756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:37.057806  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:37.099006  585602 cri.go:89] found id: ""
	I1205 20:34:37.099037  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.099045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:37.099055  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:37.099070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:37.186218  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:37.186264  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:37.232921  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:37.232955  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:37.285539  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:37.285581  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:37.301115  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:37.301155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:37.373249  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:39.873692  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:39.887772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:39.887847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:39.925558  585602 cri.go:89] found id: ""
	I1205 20:34:39.925595  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.925607  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:39.925615  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:39.925684  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:39.964967  585602 cri.go:89] found id: ""
	I1205 20:34:39.964994  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.965004  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:39.965011  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:39.965073  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:40.010875  585602 cri.go:89] found id: ""
	I1205 20:34:40.010911  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.010923  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:40.010930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:40.011003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:40.050940  585602 cri.go:89] found id: ""
	I1205 20:34:40.050970  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.050981  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:40.050990  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:40.051052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:40.086157  585602 cri.go:89] found id: ""
	I1205 20:34:40.086197  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.086210  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:40.086219  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:40.086283  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:40.123280  585602 cri.go:89] found id: ""
	I1205 20:34:40.123321  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.123333  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:40.123344  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:40.123414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:40.164755  585602 cri.go:89] found id: ""
	I1205 20:34:40.164784  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.164793  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:40.164800  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:40.164871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:40.211566  585602 cri.go:89] found id: ""
	I1205 20:34:40.211595  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.211608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:40.211621  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:40.211638  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:40.275269  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:40.275326  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:40.303724  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:40.303754  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:40.377315  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:40.377345  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:40.377360  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:40.457744  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:40.457794  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:38.163598  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.164173  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.043947  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:44.542445  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.621824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:45.120127  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:43.000390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:43.015220  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:43.015308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:43.051919  585602 cri.go:89] found id: ""
	I1205 20:34:43.051946  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.051955  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:43.051961  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:43.052034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:43.088188  585602 cri.go:89] found id: ""
	I1205 20:34:43.088230  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.088241  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:43.088249  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:43.088350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:43.125881  585602 cri.go:89] found id: ""
	I1205 20:34:43.125910  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.125922  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:43.125930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:43.125988  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:43.166630  585602 cri.go:89] found id: ""
	I1205 20:34:43.166657  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.166674  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:43.166682  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:43.166744  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:43.206761  585602 cri.go:89] found id: ""
	I1205 20:34:43.206791  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.206803  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:43.206810  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:43.206873  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:43.242989  585602 cri.go:89] found id: ""
	I1205 20:34:43.243017  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.243026  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:43.243033  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:43.243094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:43.281179  585602 cri.go:89] found id: ""
	I1205 20:34:43.281208  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.281217  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:43.281223  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:43.281272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:43.317283  585602 cri.go:89] found id: ""
	I1205 20:34:43.317314  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.317326  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:43.317347  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:43.317362  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:43.369262  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:43.369303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:43.386137  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:43.386182  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:43.458532  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:43.458553  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:43.458566  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:43.538254  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:43.538296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:46.083593  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:46.101024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:46.101133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:46.169786  585602 cri.go:89] found id: ""
	I1205 20:34:46.169817  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.169829  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:46.169838  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:46.169905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:46.218647  585602 cri.go:89] found id: ""
	I1205 20:34:46.218689  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.218704  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:46.218713  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:46.218790  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:46.262718  585602 cri.go:89] found id: ""
	I1205 20:34:46.262749  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.262758  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:46.262764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:46.262846  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:46.301606  585602 cri.go:89] found id: ""
	I1205 20:34:46.301638  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.301649  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:46.301656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:46.301714  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:46.337313  585602 cri.go:89] found id: ""
	I1205 20:34:46.337347  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.337356  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:46.337362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:46.337422  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:46.380171  585602 cri.go:89] found id: ""
	I1205 20:34:46.380201  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.380209  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:46.380215  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:46.380288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:46.423054  585602 cri.go:89] found id: ""
	I1205 20:34:46.423089  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.423101  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:46.423109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:46.423178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:46.467615  585602 cri.go:89] found id: ""
	I1205 20:34:46.467647  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.467659  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:46.467673  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:46.467687  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:46.522529  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:46.522579  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:46.537146  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:46.537199  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:46.609585  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:46.609618  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:46.609637  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:46.696093  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:46.696152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:45.164249  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.664159  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:46.547883  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.043793  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.623375  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:50.122680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.238735  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:49.256406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:49.256484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:49.294416  585602 cri.go:89] found id: ""
	I1205 20:34:49.294449  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.294458  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:49.294467  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:49.294528  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:49.334235  585602 cri.go:89] found id: ""
	I1205 20:34:49.334268  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.334282  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:49.334290  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:49.334362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:49.372560  585602 cri.go:89] found id: ""
	I1205 20:34:49.372637  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.372662  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:49.372674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:49.372756  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:49.413779  585602 cri.go:89] found id: ""
	I1205 20:34:49.413813  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.413822  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:49.413829  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:49.413900  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:49.449513  585602 cri.go:89] found id: ""
	I1205 20:34:49.449543  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.449553  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:49.449560  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:49.449630  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:49.488923  585602 cri.go:89] found id: ""
	I1205 20:34:49.488961  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.488973  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:49.488982  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:49.489050  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:49.524922  585602 cri.go:89] found id: ""
	I1205 20:34:49.524959  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.524971  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:49.524980  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:49.525048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:49.565700  585602 cri.go:89] found id: ""
	I1205 20:34:49.565735  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.565745  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:49.565756  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:49.565769  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:49.624297  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:49.624339  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:49.641424  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:49.641465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:49.721474  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:49.721504  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:49.721517  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:49.810777  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:49.810822  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:49.664998  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.163337  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:51.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:54.045218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.621649  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:55.120035  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.354661  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:52.368481  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:52.368555  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:52.407081  585602 cri.go:89] found id: ""
	I1205 20:34:52.407110  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.407118  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:52.407125  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:52.407189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:52.444462  585602 cri.go:89] found id: ""
	I1205 20:34:52.444489  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.444498  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:52.444505  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:52.444562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:52.483546  585602 cri.go:89] found id: ""
	I1205 20:34:52.483573  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.483582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:52.483595  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:52.483648  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:52.526529  585602 cri.go:89] found id: ""
	I1205 20:34:52.526567  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.526579  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:52.526587  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:52.526655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:52.564875  585602 cri.go:89] found id: ""
	I1205 20:34:52.564904  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.564913  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:52.564919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:52.564984  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:52.599367  585602 cri.go:89] found id: ""
	I1205 20:34:52.599397  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.599410  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:52.599419  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:52.599475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:52.638192  585602 cri.go:89] found id: ""
	I1205 20:34:52.638233  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.638247  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:52.638255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:52.638336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:52.675227  585602 cri.go:89] found id: ""
	I1205 20:34:52.675264  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.675275  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:52.675287  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:52.675311  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:52.716538  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:52.716582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:52.772121  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:52.772162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:52.787598  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:52.787632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:52.865380  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:52.865408  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:52.865422  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.449288  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:55.462386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:55.462474  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:55.498350  585602 cri.go:89] found id: ""
	I1205 20:34:55.498382  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.498391  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:55.498397  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:55.498457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:55.540878  585602 cri.go:89] found id: ""
	I1205 20:34:55.540915  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.540929  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:55.540939  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:55.541022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:55.577248  585602 cri.go:89] found id: ""
	I1205 20:34:55.577277  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.577288  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:55.577294  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:55.577375  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:55.615258  585602 cri.go:89] found id: ""
	I1205 20:34:55.615287  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.615308  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:55.615316  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:55.615384  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:55.652102  585602 cri.go:89] found id: ""
	I1205 20:34:55.652136  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.652147  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:55.652157  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:55.652228  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:55.689353  585602 cri.go:89] found id: ""
	I1205 20:34:55.689387  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.689399  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:55.689408  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:55.689486  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:55.727603  585602 cri.go:89] found id: ""
	I1205 20:34:55.727634  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.727648  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:55.727657  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:55.727729  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:55.765103  585602 cri.go:89] found id: ""
	I1205 20:34:55.765134  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.765143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:55.765156  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:55.765169  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:55.823878  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:55.823923  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:55.838966  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:55.839001  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:55.909385  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:55.909412  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:55.909424  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.992036  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:55.992080  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:54.165488  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.166030  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.542663  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.543260  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:57.120140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:59.621190  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.537231  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:58.552307  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:58.552392  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:58.589150  585602 cri.go:89] found id: ""
	I1205 20:34:58.589184  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.589200  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:58.589206  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:58.589272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:58.630344  585602 cri.go:89] found id: ""
	I1205 20:34:58.630370  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.630378  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:58.630385  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:58.630452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:58.669953  585602 cri.go:89] found id: ""
	I1205 20:34:58.669981  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.669991  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:58.669999  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:58.670055  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:58.708532  585602 cri.go:89] found id: ""
	I1205 20:34:58.708562  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.708570  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:58.708577  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:58.708631  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:58.745944  585602 cri.go:89] found id: ""
	I1205 20:34:58.745975  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.745986  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:58.745994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:58.746051  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.787177  585602 cri.go:89] found id: ""
	I1205 20:34:58.787206  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.787214  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:58.787221  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:58.787272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:58.822084  585602 cri.go:89] found id: ""
	I1205 20:34:58.822123  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.822134  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:58.822142  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:58.822210  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:58.858608  585602 cri.go:89] found id: ""
	I1205 20:34:58.858645  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.858657  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:58.858670  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:58.858691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:58.873289  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:58.873322  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:58.947855  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:58.947884  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:58.947900  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:59.028348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:59.028397  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:59.069172  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:59.069206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.623309  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:01.637362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:01.637449  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:01.678867  585602 cri.go:89] found id: ""
	I1205 20:35:01.678907  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.678919  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:01.678928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:01.679001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:01.715333  585602 cri.go:89] found id: ""
	I1205 20:35:01.715364  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.715372  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:01.715379  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:01.715439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:01.754247  585602 cri.go:89] found id: ""
	I1205 20:35:01.754277  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.754286  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:01.754292  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:01.754348  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:01.791922  585602 cri.go:89] found id: ""
	I1205 20:35:01.791957  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.791968  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:01.791977  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:01.792045  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:01.827261  585602 cri.go:89] found id: ""
	I1205 20:35:01.827294  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.827307  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:01.827315  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:01.827389  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.665248  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.163431  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.043056  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:03.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:02.122540  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:04.620544  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.864205  585602 cri.go:89] found id: ""
	I1205 20:35:01.864234  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.864243  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:01.864249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:01.864332  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:01.902740  585602 cri.go:89] found id: ""
	I1205 20:35:01.902773  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.902783  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:01.902789  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:01.902857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:01.941627  585602 cri.go:89] found id: ""
	I1205 20:35:01.941657  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.941666  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:01.941677  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:01.941690  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.995743  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:01.995791  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:02.010327  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:02.010368  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:02.086879  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:02.086907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:02.086921  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:02.166500  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:02.166538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:04.716638  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:04.730922  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:04.730992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:04.768492  585602 cri.go:89] found id: ""
	I1205 20:35:04.768524  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.768534  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:04.768540  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:04.768606  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:04.803740  585602 cri.go:89] found id: ""
	I1205 20:35:04.803776  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.803789  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:04.803797  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:04.803866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:04.840907  585602 cri.go:89] found id: ""
	I1205 20:35:04.840947  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.840960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:04.840968  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:04.841036  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:04.875901  585602 cri.go:89] found id: ""
	I1205 20:35:04.875933  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.875943  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:04.875949  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:04.876003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:04.913581  585602 cri.go:89] found id: ""
	I1205 20:35:04.913617  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.913627  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:04.913634  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:04.913689  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:04.952460  585602 cri.go:89] found id: ""
	I1205 20:35:04.952504  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.952519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:04.952528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:04.952617  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:04.989939  585602 cri.go:89] found id: ""
	I1205 20:35:04.989968  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.989979  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:04.989985  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:04.990041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:05.025017  585602 cri.go:89] found id: ""
	I1205 20:35:05.025052  585602 logs.go:282] 0 containers: []
	W1205 20:35:05.025066  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:05.025078  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:05.025094  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:05.068179  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:05.068223  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:05.127311  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:05.127369  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:05.141092  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:05.141129  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:05.217648  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:05.217678  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:05.217691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:03.163987  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:05.164131  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:07.165804  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:06.043765  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:08.036400  585113 pod_ready.go:82] duration metric: took 4m0.000157493s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	E1205 20:35:08.036457  585113 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:35:08.036489  585113 pod_ready.go:39] duration metric: took 4m11.05050249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:08.036554  585113 kubeadm.go:597] duration metric: took 4m18.178903617s to restartPrimaryControlPlane
	W1205 20:35:08.036733  585113 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:08.036784  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:06.621887  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:09.119692  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:07.793457  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:07.808710  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:07.808778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:07.846331  585602 cri.go:89] found id: ""
	I1205 20:35:07.846366  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.846380  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:07.846389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:07.846462  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:07.881185  585602 cri.go:89] found id: ""
	I1205 20:35:07.881222  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.881236  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:07.881243  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:07.881307  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:07.918463  585602 cri.go:89] found id: ""
	I1205 20:35:07.918501  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.918514  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:07.918522  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:07.918589  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:07.956329  585602 cri.go:89] found id: ""
	I1205 20:35:07.956364  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.956375  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:07.956385  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:07.956456  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:07.992173  585602 cri.go:89] found id: ""
	I1205 20:35:07.992212  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.992222  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:07.992229  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:07.992318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:08.030183  585602 cri.go:89] found id: ""
	I1205 20:35:08.030214  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.030226  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:08.030235  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:08.030309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:08.072320  585602 cri.go:89] found id: ""
	I1205 20:35:08.072362  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.072374  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:08.072382  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:08.072452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:08.124220  585602 cri.go:89] found id: ""
	I1205 20:35:08.124253  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.124277  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:08.124292  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:08.124310  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:08.171023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:08.171057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:08.237645  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:08.237699  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:08.252708  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:08.252744  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:08.343107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:08.343140  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:08.343158  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:10.919646  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:10.934494  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:10.934562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:10.971816  585602 cri.go:89] found id: ""
	I1205 20:35:10.971855  585602 logs.go:282] 0 containers: []
	W1205 20:35:10.971868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:10.971878  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:10.971950  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:11.010031  585602 cri.go:89] found id: ""
	I1205 20:35:11.010071  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.010084  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:11.010095  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:11.010170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:11.046520  585602 cri.go:89] found id: ""
	I1205 20:35:11.046552  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.046561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:11.046568  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:11.046632  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:11.081385  585602 cri.go:89] found id: ""
	I1205 20:35:11.081426  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.081440  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:11.081448  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:11.081522  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:11.122529  585602 cri.go:89] found id: ""
	I1205 20:35:11.122559  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.122568  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:11.122576  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:11.122656  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:11.161684  585602 cri.go:89] found id: ""
	I1205 20:35:11.161767  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.161788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:11.161797  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:11.161862  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:11.199796  585602 cri.go:89] found id: ""
	I1205 20:35:11.199824  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.199833  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:11.199842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:11.199916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:11.235580  585602 cri.go:89] found id: ""
	I1205 20:35:11.235617  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.235625  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:11.235635  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:11.235647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:11.291005  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:11.291055  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:11.305902  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:11.305947  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:11.375862  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:11.375894  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:11.375915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:11.456701  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:11.456746  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:09.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.664200  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.119954  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:13.120903  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:15.622247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:14.006509  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:14.020437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:14.020531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:14.056878  585602 cri.go:89] found id: ""
	I1205 20:35:14.056905  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.056915  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:14.056923  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:14.056993  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:14.091747  585602 cri.go:89] found id: ""
	I1205 20:35:14.091782  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.091792  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:14.091800  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:14.091860  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:14.131409  585602 cri.go:89] found id: ""
	I1205 20:35:14.131440  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.131453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:14.131461  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:14.131532  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:14.170726  585602 cri.go:89] found id: ""
	I1205 20:35:14.170754  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.170765  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:14.170773  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:14.170851  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:14.208619  585602 cri.go:89] found id: ""
	I1205 20:35:14.208654  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.208666  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:14.208674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:14.208747  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:14.247734  585602 cri.go:89] found id: ""
	I1205 20:35:14.247771  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.247784  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:14.247793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:14.247855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:14.296090  585602 cri.go:89] found id: ""
	I1205 20:35:14.296119  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.296129  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:14.296136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:14.296205  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:14.331009  585602 cri.go:89] found id: ""
	I1205 20:35:14.331037  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.331045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:14.331057  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:14.331070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:14.384877  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:14.384935  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:14.400458  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:14.400507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:14.475745  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:14.475774  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:14.475787  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:14.553150  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:14.553192  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:14.164516  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:16.165316  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:18.119418  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.120499  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:17.095700  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:17.109135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:17.109215  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:17.146805  585602 cri.go:89] found id: ""
	I1205 20:35:17.146838  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.146851  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:17.146861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:17.146919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:17.186861  585602 cri.go:89] found id: ""
	I1205 20:35:17.186891  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.186901  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:17.186907  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:17.186960  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:17.223113  585602 cri.go:89] found id: ""
	I1205 20:35:17.223148  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.223159  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:17.223166  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:17.223238  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:17.263066  585602 cri.go:89] found id: ""
	I1205 20:35:17.263098  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.263110  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:17.263118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:17.263187  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:17.300113  585602 cri.go:89] found id: ""
	I1205 20:35:17.300153  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.300167  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:17.300175  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:17.300237  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:17.339135  585602 cri.go:89] found id: ""
	I1205 20:35:17.339172  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.339184  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:17.339193  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:17.339260  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:17.376200  585602 cri.go:89] found id: ""
	I1205 20:35:17.376229  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.376239  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:17.376248  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:17.376354  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:17.411852  585602 cri.go:89] found id: ""
	I1205 20:35:17.411895  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.411906  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:17.411919  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:17.411948  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:17.463690  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:17.463729  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:17.478912  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:17.478946  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:17.552874  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:17.552907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:17.552933  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:17.633621  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:17.633667  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:20.175664  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:20.191495  585602 kubeadm.go:597] duration metric: took 4m4.568774806s to restartPrimaryControlPlane
	W1205 20:35:20.191570  585602 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:20.191594  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:20.660014  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:20.676684  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:20.688338  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:20.699748  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:20.699770  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:20.699822  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:20.710417  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:20.710497  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:20.722295  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:20.732854  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:20.732933  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:20.744242  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.754593  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:20.754671  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.766443  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:20.777087  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:20.777157  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:35:20.788406  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:20.869602  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:35:20.869778  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:21.022417  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:21.022558  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:21.022715  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:35:21.213817  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:21.216995  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:21.217146  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:21.217240  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:21.217373  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:21.217502  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:21.217614  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:21.217699  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:21.217784  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:21.217876  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:21.217985  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:21.218129  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:21.218186  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:21.218289  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:21.337924  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:21.464355  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:21.709734  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:21.837040  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:21.860767  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:21.860894  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:21.860934  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:22.002564  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:18.663978  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.665113  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.622593  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.120101  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.004407  585602 out.go:235]   - Booting up control plane ...
	I1205 20:35:22.004560  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:22.009319  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:22.010412  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:22.019041  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:22.021855  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:35:23.163493  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.164833  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.164914  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.619140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.622476  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.664525  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:32.163413  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.411201  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.37438104s)
	I1205 20:35:34.411295  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:34.428580  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:34.439233  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:34.450165  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:34.450192  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:34.450255  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:34.461910  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:34.461985  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:34.473936  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:34.484160  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:34.484240  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:34.495772  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.507681  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:34.507757  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.519932  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:34.532111  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:34.532190  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:35:34.543360  585113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:34.594095  585113 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:35:34.594214  585113 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:34.712502  585113 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:34.712685  585113 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:34.712818  585113 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:35:34.729419  585113 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:34.731281  585113 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:34.731395  585113 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:34.731486  585113 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:34.731614  585113 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:34.731715  585113 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:34.731812  585113 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:34.731902  585113 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:34.731994  585113 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:34.732082  585113 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:34.732179  585113 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:34.732252  585113 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:34.732336  585113 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:34.732428  585113 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:35.125135  585113 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:35.188591  585113 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:35:35.330713  585113 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:35.497785  585113 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:35.839010  585113 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:35.839656  585113 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:35.842311  585113 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:32.118898  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.119153  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.164007  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:36.164138  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:35.844403  585113 out.go:235]   - Booting up control plane ...
	I1205 20:35:35.844534  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:35.844602  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:35.845242  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:35.865676  585113 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:35.871729  585113 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:35.871825  585113 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:36.007728  585113 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:35:36.007948  585113 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:35:36.510090  585113 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.141078ms
	I1205 20:35:36.510208  585113 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:35:36.119432  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:38.121093  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.620523  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:41.512166  585113 kubeadm.go:310] [api-check] The API server is healthy after 5.00243802s
	I1205 20:35:41.529257  585113 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:35:41.545958  585113 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:35:41.585500  585113 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:35:41.585726  585113 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-789000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:35:41.606394  585113 kubeadm.go:310] [bootstrap-token] Using token: j30n5x.myrhz9pya6yl1f1z
	I1205 20:35:41.608046  585113 out.go:235]   - Configuring RBAC rules ...
	I1205 20:35:41.608229  585113 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:35:41.616083  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:35:41.625777  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:35:41.629934  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:35:41.633726  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:35:41.640454  585113 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:35:41.923125  585113 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:35:42.363841  585113 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:35:42.924569  585113 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:35:42.924594  585113 kubeadm.go:310] 
	I1205 20:35:42.924660  585113 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:35:42.924668  585113 kubeadm.go:310] 
	I1205 20:35:42.924750  585113 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:35:42.924768  585113 kubeadm.go:310] 
	I1205 20:35:42.924802  585113 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:35:42.924865  585113 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:35:42.924926  585113 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:35:42.924969  585113 kubeadm.go:310] 
	I1205 20:35:42.925060  585113 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:35:42.925069  585113 kubeadm.go:310] 
	I1205 20:35:42.925120  585113 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:35:42.925154  585113 kubeadm.go:310] 
	I1205 20:35:42.925255  585113 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:35:42.925374  585113 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:35:42.925477  585113 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:35:42.925488  585113 kubeadm.go:310] 
	I1205 20:35:42.925604  585113 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:35:42.925691  585113 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:35:42.925701  585113 kubeadm.go:310] 
	I1205 20:35:42.925830  585113 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.925966  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:35:42.926019  585113 kubeadm.go:310] 	--control-plane 
	I1205 20:35:42.926034  585113 kubeadm.go:310] 
	I1205 20:35:42.926136  585113 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:35:42.926147  585113 kubeadm.go:310] 
	I1205 20:35:42.926258  585113 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.926400  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 20:35:42.927105  585113 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:35:42.927269  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:35:42.927283  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:35:42.929046  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:35:38.164698  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.665499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:42.930620  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:35:42.941706  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:35:42.964041  585113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:35:42.964154  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.964191  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-789000 minikube.k8s.io/updated_at=2024_12_05T20_35_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=embed-certs-789000 minikube.k8s.io/primary=true
	I1205 20:35:43.027876  585113 ops.go:34] apiserver oom_adj: -16
	I1205 20:35:43.203087  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:43.703446  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.203895  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.703277  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:45.203421  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.623820  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.118957  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.704129  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.203682  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.703213  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.203225  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.330051  585113 kubeadm.go:1113] duration metric: took 4.365966546s to wait for elevateKubeSystemPrivileges
	I1205 20:35:47.330104  585113 kubeadm.go:394] duration metric: took 4m57.530103825s to StartCluster
	I1205 20:35:47.330143  585113 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.330296  585113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:35:47.332937  585113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.333273  585113 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:47.333380  585113 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:35:47.333478  585113 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-789000"
	I1205 20:35:47.333500  585113 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-789000"
	I1205 20:35:47.333499  585113 addons.go:69] Setting default-storageclass=true in profile "embed-certs-789000"
	W1205 20:35:47.333510  585113 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:35:47.333523  585113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-789000"
	I1205 20:35:47.333545  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.333554  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:47.333631  585113 addons.go:69] Setting metrics-server=true in profile "embed-certs-789000"
	I1205 20:35:47.333651  585113 addons.go:234] Setting addon metrics-server=true in "embed-certs-789000"
	W1205 20:35:47.333660  585113 addons.go:243] addon metrics-server should already be in state true
	I1205 20:35:47.333692  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.334001  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334043  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334003  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334101  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334157  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334339  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.335448  585113 out.go:177] * Verifying Kubernetes components...
	I1205 20:35:47.337056  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:47.353039  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I1205 20:35:47.353726  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.354437  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.354467  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.354870  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.355580  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.355654  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.355702  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I1205 20:35:47.355760  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1205 20:35:47.356180  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356224  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356771  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356796  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.356815  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356834  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.357246  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357245  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.357862  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.357916  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.361951  585113 addons.go:234] Setting addon default-storageclass=true in "embed-certs-789000"
	W1205 20:35:47.361974  585113 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:35:47.362004  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.362369  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.362416  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.372862  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I1205 20:35:47.373465  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.373983  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.374011  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.374347  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.374570  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.376329  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.378476  585113 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:35:47.379882  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:35:47.379909  585113 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:35:47.379933  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.382045  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
	I1205 20:35:47.382855  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.383440  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.383459  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.383563  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.383828  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.384092  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.384101  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.384117  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.384150  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1205 20:35:47.384381  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.384517  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.384635  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.384705  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.384850  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.385249  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.385262  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.385613  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.385744  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.386054  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.386085  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.387649  585113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:35:43.164980  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.665449  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.665725  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.388998  585113 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.389011  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:35:47.389025  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.391724  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392285  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.392317  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392362  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.392521  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.392663  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.392804  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.402558  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I1205 20:35:47.403109  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.403636  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.403653  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.403977  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.404155  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.405636  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.405859  585113 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.405876  585113 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:35:47.405894  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.408366  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.408827  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.408868  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.409107  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.409276  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.409436  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.409577  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.589046  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:47.620164  585113 node_ready.go:35] waiting up to 6m0s for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635800  585113 node_ready.go:49] node "embed-certs-789000" has status "Ready":"True"
	I1205 20:35:47.635824  585113 node_ready.go:38] duration metric: took 15.625152ms for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635836  585113 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:47.647842  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:47.738529  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:35:47.738558  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:35:47.741247  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.741443  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.822503  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:35:47.822543  585113 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:35:47.886482  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:47.886512  585113 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:35:47.926018  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:48.100013  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100059  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.100371  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.100392  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.100408  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100416  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.102261  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.102313  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.102342  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115407  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.115429  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.115762  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.115859  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115870  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721035  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721068  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721380  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721400  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.721447  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721855  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721868  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721880  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.294512  585113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.36844122s)
	I1205 20:35:49.294581  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.294598  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.294953  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295014  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295028  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295057  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.295071  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.295341  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295391  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295403  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295414  585113 addons.go:475] Verifying addon metrics-server=true in "embed-certs-789000"
	I1205 20:35:49.297183  585113 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:35:49.298509  585113 addons.go:510] duration metric: took 1.965140064s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
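
The addon flow that just completed follows a two-step pattern visible in the log: each manifest is copied into /etc/kubernetes/addons on the node, then the whole set is applied with a single kubectl invocation. A rough local sketch of that pattern is below; the manifest contents and the /tmp/addons directory are placeholders, and the real flow copies the files over SSH rather than writing them locally.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Placeholder manifests; minikube ships the real ones as embedded assets.
	manifests := map[string][]byte{
		"metrics-apiservice.yaml":        []byte("# apiservice manifest here"),
		"metrics-server-deployment.yaml": []byte("# deployment manifest here"),
	}

	dir := "/tmp/addons" // stand-in for /etc/kubernetes/addons on the node
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}

	// Step 1: put each manifest in place (minikube does this with scp over SSH).
	args := []string{"apply"}
	for name, data := range manifests {
		path := filepath.Join(dir, name)
		if err := os.WriteFile(path, data, 0o644); err != nil {
			log.Fatal(err)
		}
		args = append(args, "-f", path)
	}

	// Step 2: apply all manifests in one kubectl call, as in the log above.
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	log.Printf("applied addons:\n%s", out)
}
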
	I1205 20:35:49.657195  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.121445  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:49.622568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:50.163712  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.165654  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.155012  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.155309  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.155346  585113 pod_ready.go:82] duration metric: took 6.507465102s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.155356  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160866  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.160895  585113 pod_ready.go:82] duration metric: took 5.529623ms for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160909  585113 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166444  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.166475  585113 pod_ready.go:82] duration metric: took 5.558605ms for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166487  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:52.118202  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.119543  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.664661  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.162802  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:56.172832  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.173005  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.173052  585113 pod_ready.go:82] duration metric: took 3.006542827s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.173068  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178461  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.178489  585113 pod_ready.go:82] duration metric: took 5.413563ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178499  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183130  585113 pod_ready.go:93] pod "kube-proxy-znjpk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.183162  585113 pod_ready.go:82] duration metric: took 4.655743ms for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183178  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351816  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.351842  585113 pod_ready.go:82] duration metric: took 168.656328ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351851  585113 pod_ready.go:39] duration metric: took 9.716003373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
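
The pod_ready.go lines above poll each system pod until its Ready condition turns True (or a per-pod timeout expires). A sketch of the same check using client-go is below; this is an illustration, not minikube's implementation, and the kubeconfig path and pod name are simply taken from the log above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// status the pod_ready.go lines are waiting on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as reported in the log; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20052-530897/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-6mp2h", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
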
	I1205 20:35:57.351866  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:35:57.351921  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:57.368439  585113 api_server.go:72] duration metric: took 10.035127798s to wait for apiserver process to appear ...
	I1205 20:35:57.368471  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:35:57.368496  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:35:57.372531  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:35:57.373449  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:35:57.373466  585113 api_server.go:131] duration metric: took 4.987422ms to wait for apiserver health ...
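
The healthz wait above is an HTTP probe of the apiserver: a healthy control plane answers 200 with the body "ok". A minimal sketch of that probe follows; skipping TLS verification here is an assumption made for brevity, whereas the real client trusts the cluster's CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: InsecureSkipVerify for brevity; the real check uses the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.200:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 with the literal body "ok", matching the log above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
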
	I1205 20:35:57.373474  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:35:57.554591  585113 system_pods.go:59] 9 kube-system pods found
	I1205 20:35:57.554620  585113 system_pods.go:61] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.554625  585113 system_pods.go:61] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.554629  585113 system_pods.go:61] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.554633  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.554637  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.554640  585113 system_pods.go:61] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.554643  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.554649  585113 system_pods.go:61] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.554653  585113 system_pods.go:61] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.554660  585113 system_pods.go:74] duration metric: took 181.180919ms to wait for pod list to return data ...
	I1205 20:35:57.554667  585113 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:35:57.757196  585113 default_sa.go:45] found service account: "default"
	I1205 20:35:57.757226  585113 default_sa.go:55] duration metric: took 202.553823ms for default service account to be created ...
	I1205 20:35:57.757236  585113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:35:57.956943  585113 system_pods.go:86] 9 kube-system pods found
	I1205 20:35:57.956976  585113 system_pods.go:89] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.956982  585113 system_pods.go:89] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.956985  585113 system_pods.go:89] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.956989  585113 system_pods.go:89] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.956992  585113 system_pods.go:89] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.956996  585113 system_pods.go:89] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.956999  585113 system_pods.go:89] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.957005  585113 system_pods.go:89] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.957010  585113 system_pods.go:89] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.957019  585113 system_pods.go:126] duration metric: took 199.777723ms to wait for k8s-apps to be running ...
	I1205 20:35:57.957028  585113 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:35:57.957079  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:57.971959  585113 system_svc.go:56] duration metric: took 14.916307ms WaitForService to wait for kubelet
	I1205 20:35:57.972000  585113 kubeadm.go:582] duration metric: took 10.638693638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:35:57.972027  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:35:58.153272  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:35:58.153302  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:35:58.153323  585113 node_conditions.go:105] duration metric: took 181.282208ms to run NodePressure ...
	I1205 20:35:58.153338  585113 start.go:241] waiting for startup goroutines ...
	I1205 20:35:58.153348  585113 start.go:246] waiting for cluster config update ...
	I1205 20:35:58.153361  585113 start.go:255] writing updated cluster config ...
	I1205 20:35:58.153689  585113 ssh_runner.go:195] Run: rm -f paused
	I1205 20:35:58.206377  585113 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:35:58.208199  585113 out.go:177] * Done! kubectl is now configured to use "embed-certs-789000" cluster and "default" namespace by default
	I1205 20:35:56.626799  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.119621  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.164803  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.663254  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.119680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:03.121023  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.121537  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:02.025194  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:36:02.025306  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:02.025498  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
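
The kubelet-check failures above come from kubeadm probing the kubelet's local healthz port; while the kubelet is not running, that probe fails with "connection refused", exactly as logged. A small sketch of the same probe:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	// Same endpoint kubeadm curls in the log above: the kubelet's healthz port on localhost.
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// With the kubelet stopped this reports "connection refused", as in the log.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}
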
	I1205 20:36:03.664172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.672410  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.623229  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.119845  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.025608  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:07.025922  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:08.164875  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.665374  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:12.622566  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.120084  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:13.163662  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.164021  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.619629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:19.620524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.026490  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:17.026747  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:19.663904  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:22.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:21.621019  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.119524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.164932  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.670748  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.119795  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:27.113870  585025 pod_ready.go:82] duration metric: took 4m0.000886242s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:27.113920  585025 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:36:27.113943  585025 pod_ready.go:39] duration metric: took 4m14.547292745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:27.113975  585025 kubeadm.go:597] duration metric: took 4m21.939840666s to restartPrimaryControlPlane
	W1205 20:36:27.114068  585025 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:36:27.114099  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:36:29.163499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:29.664158  585929 pod_ready.go:82] duration metric: took 4m0.007168384s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:29.664191  585929 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:36:29.664201  585929 pod_ready.go:39] duration metric: took 4m2.00733866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:29.664226  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:36:29.664290  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:29.664377  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:29.712790  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:29.712814  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:29.712819  585929 cri.go:89] found id: ""
	I1205 20:36:29.712826  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:29.712879  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.717751  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.721968  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:29.722045  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:29.770289  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:29.770322  585929 cri.go:89] found id: ""
	I1205 20:36:29.770330  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:29.770392  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.775391  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:29.775475  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:29.816354  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:29.816380  585929 cri.go:89] found id: ""
	I1205 20:36:29.816388  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:29.816454  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.821546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:29.821621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:29.870442  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:29.870467  585929 cri.go:89] found id: ""
	I1205 20:36:29.870476  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:29.870541  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.875546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:29.875658  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:29.924567  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:29.924595  585929 cri.go:89] found id: ""
	I1205 20:36:29.924603  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:29.924666  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.929148  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:29.929216  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:29.968092  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:29.968122  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:29.968126  585929 cri.go:89] found id: ""
	I1205 20:36:29.968134  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:29.968186  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.973062  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.977693  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:29.977762  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:30.014944  585929 cri.go:89] found id: ""
	I1205 20:36:30.014982  585929 logs.go:282] 0 containers: []
	W1205 20:36:30.014994  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:30.015002  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:30.015101  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:30.062304  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:30.062328  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:30.062332  585929 cri.go:89] found id: ""
	I1205 20:36:30.062339  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:30.062394  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.067152  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.071767  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:30.071788  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:30.125030  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:30.125069  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:30.167607  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:30.167641  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:30.217522  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:30.217558  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:30.298655  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:30.298695  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:30.346687  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:30.346721  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:30.887069  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:30.887126  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:30.907313  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:30.907360  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:30.950285  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:30.950326  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:30.990895  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:30.990929  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:31.032950  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:31.033010  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:31.115132  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:31.115176  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:31.257760  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:31.257797  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:31.300521  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:31.300553  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:31.338339  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:31.338373  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
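
The log-gathering pass that just finished follows a fixed pattern: list the container IDs for a component with `crictl ps -a --quiet --name=...`, then tail each container's log with `crictl logs --tail 400`. The sketch below reproduces that pattern via os/exec; it assumes crictl is on PATH and that sudo is available non-interactively.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// name matches the given filter, using the same crictl flags as the log above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	for _, id := range ids {
		// Tail the last 400 lines of each container, mirroring the commands in the log.
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("logs for %s failed: %v\n", id, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", id, logs)
	}
}
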
	I1205 20:36:33.892406  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:36:33.908917  585929 api_server.go:72] duration metric: took 4m14.472283422s to wait for apiserver process to appear ...
	I1205 20:36:33.908950  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:36:33.908993  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:33.909067  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:33.958461  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:33.958496  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:33.958502  585929 cri.go:89] found id: ""
	I1205 20:36:33.958511  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:33.958585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.963333  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.969472  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:33.969549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:34.010687  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.010711  585929 cri.go:89] found id: ""
	I1205 20:36:34.010721  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:34.010790  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.016468  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:34.016557  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:34.056627  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:34.056656  585929 cri.go:89] found id: ""
	I1205 20:36:34.056666  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:34.056729  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.061343  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:34.061411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:34.099534  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:34.099563  585929 cri.go:89] found id: ""
	I1205 20:36:34.099573  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:34.099643  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.104828  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:34.104891  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:34.150749  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:34.150781  585929 cri.go:89] found id: ""
	I1205 20:36:34.150792  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:34.150863  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.155718  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:34.155797  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:34.202896  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:34.202927  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:34.202934  585929 cri.go:89] found id: ""
	I1205 20:36:34.202943  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:34.203028  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.207791  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.212163  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:34.212243  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:34.254423  585929 cri.go:89] found id: ""
	I1205 20:36:34.254458  585929 logs.go:282] 0 containers: []
	W1205 20:36:34.254470  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:34.254479  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:34.254549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:34.294704  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:34.294737  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:34.294741  585929 cri.go:89] found id: ""
	I1205 20:36:34.294753  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:34.294820  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.299361  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.305411  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:34.305437  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:34.357438  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:34.357472  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.405858  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:34.405893  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:34.898506  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:34.898551  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:35.009818  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:35.009856  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:35.048852  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:35.048882  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:35.100458  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:35.100511  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:35.139923  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:35.139959  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:35.184818  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:35.184852  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:35.265196  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:35.265238  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:35.280790  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:35.280830  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:35.323308  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:35.323343  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:35.364578  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:35.364610  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:35.411413  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:35.411456  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:35.458077  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:35.458117  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:37.997701  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:36:38.003308  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:36:38.004465  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:36:38.004495  585929 api_server.go:131] duration metric: took 4.095536578s to wait for apiserver health ...
	I1205 20:36:38.004505  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:36:38.004532  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:38.004598  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:37.027599  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:37.027910  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:38.048388  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.048427  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:38.048434  585929 cri.go:89] found id: ""
	I1205 20:36:38.048442  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:38.048514  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.052931  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.057338  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:38.057403  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:38.097715  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.097750  585929 cri.go:89] found id: ""
	I1205 20:36:38.097761  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:38.097830  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.104038  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:38.104110  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:38.148485  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.148510  585929 cri.go:89] found id: ""
	I1205 20:36:38.148519  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:38.148585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.153619  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:38.153702  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:38.190467  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.190495  585929 cri.go:89] found id: ""
	I1205 20:36:38.190505  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:38.190561  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.195177  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:38.195259  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:38.240020  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:38.240045  585929 cri.go:89] found id: ""
	I1205 20:36:38.240054  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:38.240123  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.244359  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:38.244425  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:38.282241  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:38.282267  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.282284  585929 cri.go:89] found id: ""
	I1205 20:36:38.282292  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:38.282357  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.287437  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.291561  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:38.291621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:38.333299  585929 cri.go:89] found id: ""
	I1205 20:36:38.333335  585929 logs.go:282] 0 containers: []
	W1205 20:36:38.333345  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:38.333352  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:38.333411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:38.370920  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.370948  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.370952  585929 cri.go:89] found id: ""
	I1205 20:36:38.370960  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:38.371037  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.375549  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.379517  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:38.379548  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.416990  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:38.417023  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:38.499859  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:38.499905  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:38.625291  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:38.625332  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.672549  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:38.672586  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.710017  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:38.710055  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.754004  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:38.754049  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:38.802163  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:38.802206  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:38.817670  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:38.817704  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.864833  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:38.864875  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.909490  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:38.909526  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.952117  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:38.952164  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:39.347620  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:39.347686  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:39.392412  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:39.392450  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:39.433711  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:39.433749  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
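The log-gathering pass above follows a simple pattern: minikube discovers container IDs with `crictl ps -a --quiet --name=<component>`, tails each one with `crictl logs --tail 400`, and adds journalctl output for kubelet and CRI-O, dmesg, and a `kubectl describe nodes`. A minimal sketch of the same collection done by hand on the node (container IDs are specific to each run, so they are rediscovered rather than hard-coded):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        echo "=== $name ($id) ==="
        sudo crictl logs --tail 400 "$id"
      done
    done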
	I1205 20:36:41.996602  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:36:41.996634  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:41.996640  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:41.996644  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:41.996648  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:41.996651  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:41.996654  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:41.996661  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:41.996665  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:41.996674  585929 system_pods.go:74] duration metric: took 3.992162062s to wait for pod list to return data ...
	I1205 20:36:41.996682  585929 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:36:41.999553  585929 default_sa.go:45] found service account: "default"
	I1205 20:36:41.999580  585929 default_sa.go:55] duration metric: took 2.889197ms for default service account to be created ...
	I1205 20:36:41.999589  585929 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:36:42.005061  585929 system_pods.go:86] 8 kube-system pods found
	I1205 20:36:42.005099  585929 system_pods.go:89] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:42.005111  585929 system_pods.go:89] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:42.005118  585929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:42.005126  585929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:42.005135  585929 system_pods.go:89] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:42.005143  585929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:42.005159  585929 system_pods.go:89] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:42.005171  585929 system_pods.go:89] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:42.005187  585929 system_pods.go:126] duration metric: took 5.591652ms to wait for k8s-apps to be running ...
	I1205 20:36:42.005201  585929 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:36:42.005267  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:42.021323  585929 system_svc.go:56] duration metric: took 16.10852ms WaitForService to wait for kubelet
	I1205 20:36:42.021358  585929 kubeadm.go:582] duration metric: took 4m22.584731606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:36:42.021424  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:36:42.024632  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:42.024658  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:42.024682  585929 node_conditions.go:105] duration metric: took 3.248548ms to run NodePressure ...
	I1205 20:36:42.024698  585929 start.go:241] waiting for startup goroutines ...
	I1205 20:36:42.024709  585929 start.go:246] waiting for cluster config update ...
	I1205 20:36:42.024742  585929 start.go:255] writing updated cluster config ...
	I1205 20:36:42.025047  585929 ssh_runner.go:195] Run: rm -f paused
	I1205 20:36:42.077303  585929 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:36:42.079398  585929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-942599" cluster and "default" namespace by default
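At this point the default-k8s-diff-port-942599 profile is up, with the apiserver answering on 192.168.50.96:8444 and only metrics-server still Pending. A sketch of how that end state could be spot-checked from the host (these commands are not taken from the log; the --context name follows minikube's profile naming):

    kubectl --context default-k8s-diff-port-942599 get --raw /healthz
    kubectl --context default-k8s-diff-port-942599 -n kube-system get pods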
	I1205 20:36:53.411276  585025 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297141231s)
	I1205 20:36:53.411423  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:53.432474  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:36:53.443908  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:36:53.454789  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:36:53.454821  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:36:53.454873  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:36:53.465648  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:36:53.465719  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:36:53.476492  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:36:53.486436  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:36:53.486505  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:36:53.499146  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.510237  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:36:53.510324  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.521186  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:36:53.531797  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:36:53.531890  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
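The block above is minikube's stale-kubeconfig cleanup: after `kubeadm reset`, none of the /etc/kubernetes/*.conf files exist, so each grep for the control-plane endpoint exits with status 2 and the corresponding `rm -f` is effectively a no-op before `kubeadm init` is re-run. A minimal sketch of that check pattern, assuming the same endpoint used in this run:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already points at the expected endpoint;
      # otherwise treat it as stale and remove it, as the log above does.
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done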
	I1205 20:36:53.543056  585025 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:36:53.735019  585025 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:01.531096  585025 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:37:01.531179  585025 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:37:01.531278  585025 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:37:01.531407  585025 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:37:01.531546  585025 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:37:01.531635  585025 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:37:01.533284  585025 out.go:235]   - Generating certificates and keys ...
	I1205 20:37:01.533400  585025 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:37:01.533484  585025 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:37:01.533589  585025 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:37:01.533676  585025 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:37:01.533741  585025 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:37:01.533820  585025 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:37:01.533901  585025 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:37:01.533954  585025 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:37:01.534023  585025 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:37:01.534097  585025 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:37:01.534137  585025 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:37:01.534193  585025 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:37:01.534264  585025 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:37:01.534347  585025 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:37:01.534414  585025 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:37:01.534479  585025 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:37:01.534529  585025 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:37:01.534600  585025 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:37:01.534656  585025 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:37:01.536208  585025 out.go:235]   - Booting up control plane ...
	I1205 20:37:01.536326  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:37:01.536394  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:37:01.536487  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:37:01.536653  585025 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:37:01.536772  585025 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:37:01.536814  585025 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:37:01.536987  585025 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:37:01.537144  585025 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:37:01.537240  585025 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.640403ms
	I1205 20:37:01.537352  585025 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:37:01.537438  585025 kubeadm.go:310] [api-check] The API server is healthy after 5.002069704s
	I1205 20:37:01.537566  585025 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:37:01.537705  585025 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:37:01.537766  585025 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:37:01.537959  585025 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-816185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:37:01.538037  585025 kubeadm.go:310] [bootstrap-token] Using token: l8cx4j.koqnwrdaqrc08irs
	I1205 20:37:01.539683  585025 out.go:235]   - Configuring RBAC rules ...
	I1205 20:37:01.539813  585025 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:37:01.539945  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:37:01.540157  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:37:01.540346  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:37:01.540482  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:37:01.540602  585025 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:37:01.540746  585025 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:37:01.540818  585025 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:37:01.540905  585025 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:37:01.540922  585025 kubeadm.go:310] 
	I1205 20:37:01.541012  585025 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:37:01.541027  585025 kubeadm.go:310] 
	I1205 20:37:01.541149  585025 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:37:01.541160  585025 kubeadm.go:310] 
	I1205 20:37:01.541197  585025 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:37:01.541253  585025 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:37:01.541297  585025 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:37:01.541303  585025 kubeadm.go:310] 
	I1205 20:37:01.541365  585025 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:37:01.541371  585025 kubeadm.go:310] 
	I1205 20:37:01.541417  585025 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:37:01.541427  585025 kubeadm.go:310] 
	I1205 20:37:01.541486  585025 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:37:01.541593  585025 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:37:01.541689  585025 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:37:01.541707  585025 kubeadm.go:310] 
	I1205 20:37:01.541811  585025 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:37:01.541917  585025 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:37:01.541928  585025 kubeadm.go:310] 
	I1205 20:37:01.542020  585025 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542138  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:37:01.542171  585025 kubeadm.go:310] 	--control-plane 
	I1205 20:37:01.542180  585025 kubeadm.go:310] 
	I1205 20:37:01.542264  585025 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:37:01.542283  585025 kubeadm.go:310] 
	I1205 20:37:01.542407  585025 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542513  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 20:37:01.542530  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:37:01.542538  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:37:01.543967  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:37:01.545652  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:37:01.557890  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
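Here the "kvm2" driver plus "crio" runtime selects the bridge CNI, and a 496-byte 1-k8s.conflist is copied into /etc/cni/net.d. The file's contents are not captured in the log; purely as a hypothetical illustration, a bridge CNI conflist generally has the following shape (every value below is an assumption, not the file that was written in this run):

    # Hypothetical example only: not the actual 1-k8s.conflist from this run.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF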
	I1205 20:37:01.577447  585025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:37:01.577532  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-816185 minikube.k8s.io/updated_at=2024_12_05T20_37_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=no-preload-816185 minikube.k8s.io/primary=true
	I1205 20:37:01.577542  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:01.618121  585025 ops.go:34] apiserver oom_adj: -16
	I1205 20:37:01.806825  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.307212  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.807893  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.307202  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.806891  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.307571  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.807485  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.307695  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.387751  585025 kubeadm.go:1113] duration metric: took 3.810307917s to wait for elevateKubeSystemPrivileges
	I1205 20:37:05.387790  585025 kubeadm.go:394] duration metric: took 5m0.269375789s to StartCluster
	I1205 20:37:05.387810  585025 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.387891  585025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:37:05.389703  585025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.389984  585025 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:37:05.390056  585025 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:37:05.390179  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:37:05.390193  585025 addons.go:69] Setting storage-provisioner=true in profile "no-preload-816185"
	I1205 20:37:05.390216  585025 addons.go:69] Setting default-storageclass=true in profile "no-preload-816185"
	I1205 20:37:05.390246  585025 addons.go:69] Setting metrics-server=true in profile "no-preload-816185"
	I1205 20:37:05.390281  585025 addons.go:234] Setting addon metrics-server=true in "no-preload-816185"
	W1205 20:37:05.390295  585025 addons.go:243] addon metrics-server should already be in state true
	I1205 20:37:05.390340  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390255  585025 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-816185"
	I1205 20:37:05.390263  585025 addons.go:234] Setting addon storage-provisioner=true in "no-preload-816185"
	W1205 20:37:05.390463  585025 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:37:05.390533  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390844  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390888  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.390852  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390947  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390973  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391032  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391810  585025 out.go:177] * Verifying Kubernetes components...
	I1205 20:37:05.393274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:37:05.408078  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40259
	I1205 20:37:05.408366  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1205 20:37:05.408765  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.408780  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.409315  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409337  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409441  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409465  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409767  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409800  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.410249  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I1205 20:37:05.410487  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.410537  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.410753  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.411387  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.411412  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.411847  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.412515  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.412565  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.413770  585025 addons.go:234] Setting addon default-storageclass=true in "no-preload-816185"
	W1205 20:37:05.413796  585025 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:37:05.413828  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.414184  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.414231  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.430214  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1205 20:37:05.430684  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.431260  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.431286  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.431697  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.431929  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.432941  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1205 20:37:05.433361  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.433835  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.433855  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.433933  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.434385  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.434596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.434638  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I1205 20:37:05.435193  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.435667  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.435694  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.435994  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.436000  585025 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:37:05.436635  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.436657  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.436683  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.437421  585025 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.437441  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:37:05.437461  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.438221  585025 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:37:05.439704  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:37:05.439721  585025 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:37:05.439737  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.440522  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441031  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.441058  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441198  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.441352  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.441458  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.441582  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.445842  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446223  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.446248  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446449  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.446661  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.446806  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.446923  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.472870  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I1205 20:37:05.473520  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.474053  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.474080  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.474456  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.474666  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.476603  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.476836  585025 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.476859  585025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:37:05.476886  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.480063  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480546  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.480580  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.481175  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.481331  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.481425  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.607284  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:37:05.627090  585025 node_ready.go:35] waiting up to 6m0s for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637577  585025 node_ready.go:49] node "no-preload-816185" has status "Ready":"True"
	I1205 20:37:05.637602  585025 node_ready.go:38] duration metric: took 10.476209ms for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637611  585025 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:05.642969  585025 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:05.696662  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.725276  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:37:05.725309  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:37:05.779102  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:37:05.779137  585025 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:37:05.814495  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.814531  585025 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:37:05.823828  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.863152  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.948854  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.948895  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949242  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949266  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949275  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.949294  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.949302  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949590  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949601  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949612  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.975655  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.975683  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.975962  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.975978  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004027  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.180164032s)
	I1205 20:37:07.004103  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004117  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004498  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004520  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004535  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004545  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004802  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004820  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208032  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.344819218s)
	I1205 20:37:07.208143  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208159  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208537  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208556  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208566  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208573  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208846  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208860  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208871  585025 addons.go:475] Verifying addon metrics-server=true in "no-preload-816185"
	I1205 20:37:07.210487  585025 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:37:07.212093  585025 addons.go:510] duration metric: took 1.822047986s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
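With default-storageclass, storage-provisioner, and metrics-server applied, a quick way to confirm the addons from the generated kubeconfig would be the checks below (a sketch, not commands taken from the log). metrics-server registers the v1beta1.metrics.k8s.io APIService, which stays unavailable while its pod is still Pending, as seen later in this run:

    kubectl --context no-preload-816185 -n kube-system get pods
    kubectl --context no-preload-816185 get storageclass
    kubectl --context no-preload-816185 get apiservice v1beta1.metrics.k8s.io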
	I1205 20:37:07.658678  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:08.156061  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:08.156094  585025 pod_ready.go:82] duration metric: took 2.513098547s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:08.156109  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:10.162704  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:12.163550  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.163578  585025 pod_ready.go:82] duration metric: took 4.007461295s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.163601  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169123  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.169155  585025 pod_ready.go:82] duration metric: took 5.544964ms for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169170  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.175288  585025 pod_ready.go:103] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:14.676107  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:14.676137  585025 pod_ready.go:82] duration metric: took 2.506959209s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.676146  585025 pod_ready.go:39] duration metric: took 9.038525731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:14.676165  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:37:14.676222  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:37:14.692508  585025 api_server.go:72] duration metric: took 9.302489277s to wait for apiserver process to appear ...
	I1205 20:37:14.692540  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:37:14.692562  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:37:14.697176  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:37:14.698320  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:37:14.698345  585025 api_server.go:131] duration metric: took 5.796971ms to wait for apiserver health ...
	I1205 20:37:14.698357  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:37:14.706456  585025 system_pods.go:59] 9 kube-system pods found
	I1205 20:37:14.706503  585025 system_pods.go:61] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.706512  585025 system_pods.go:61] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.706518  585025 system_pods.go:61] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.706524  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.706529  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.706534  585025 system_pods.go:61] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.706539  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.706549  585025 system_pods.go:61] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.706555  585025 system_pods.go:61] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.706565  585025 system_pods.go:74] duration metric: took 8.200516ms to wait for pod list to return data ...
	I1205 20:37:14.706577  585025 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:37:14.716217  585025 default_sa.go:45] found service account: "default"
	I1205 20:37:14.716259  585025 default_sa.go:55] duration metric: took 9.664045ms for default service account to be created ...
	I1205 20:37:14.716293  585025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:37:14.723293  585025 system_pods.go:86] 9 kube-system pods found
	I1205 20:37:14.723323  585025 system_pods.go:89] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.723329  585025 system_pods.go:89] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.723333  585025 system_pods.go:89] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.723337  585025 system_pods.go:89] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.723342  585025 system_pods.go:89] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.723346  585025 system_pods.go:89] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.723349  585025 system_pods.go:89] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.723355  585025 system_pods.go:89] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.723360  585025 system_pods.go:89] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.723368  585025 system_pods.go:126] duration metric: took 7.067824ms to wait for k8s-apps to be running ...
	I1205 20:37:14.723375  585025 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:37:14.723422  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:14.744142  585025 system_svc.go:56] duration metric: took 20.751867ms WaitForService to wait for kubelet
	I1205 20:37:14.744179  585025 kubeadm.go:582] duration metric: took 9.354165706s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:37:14.744200  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:37:14.751985  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:14.752026  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:14.752043  585025 node_conditions.go:105] duration metric: took 7.836665ms to run NodePressure ...
	I1205 20:37:14.752069  585025 start.go:241] waiting for startup goroutines ...
	I1205 20:37:14.752081  585025 start.go:246] waiting for cluster config update ...
	I1205 20:37:14.752095  585025 start.go:255] writing updated cluster config ...
	I1205 20:37:14.752490  585025 ssh_runner.go:195] Run: rm -f paused
	I1205 20:37:14.806583  585025 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:37:14.808574  585025 out.go:177] * Done! kubectl is now configured to use "no-preload-816185" cluster and "default" namespace by default
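	The run above ends with the "no-preload-816185" profile healthy: every kube-system pod except the still-pending metrics-server is Running, the default service account exists, and the node reports cpu capacity 2 and ephemeral storage 17734596Ki. As a rough cross-check of that state from the host (a sketch only, not part of the test harness; it assumes the kubectl context name matches the profile name that the log says kubectl was just configured for):

	    # Re-list the kube-system pods the wait loop just enumerated
	    kubectl --context no-preload-816185 -n kube-system get pods

	    # Re-read the capacity figures behind the NodePressure check (cpu, ephemeral-storage)
	    kubectl --context no-preload-816185 get nodes \
	      -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,STORAGE:.status.capacity.ephemeral-storage
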
	I1205 20:37:17.029681  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:37:17.029940  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:37:17.029963  585602 kubeadm.go:310] 
	I1205 20:37:17.030022  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:37:17.030101  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:37:17.030128  585602 kubeadm.go:310] 
	I1205 20:37:17.030167  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:37:17.030209  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:37:17.030353  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:37:17.030369  585602 kubeadm.go:310] 
	I1205 20:37:17.030489  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:37:17.030540  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:37:17.030584  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:37:17.030594  585602 kubeadm.go:310] 
	I1205 20:37:17.030733  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:37:17.030843  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:37:17.030855  585602 kubeadm.go:310] 
	I1205 20:37:17.031025  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:37:17.031154  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:37:17.031268  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:37:17.031374  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:37:17.031386  585602 kubeadm.go:310] 
	I1205 20:37:17.032368  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:17.032493  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:37:17.032562  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 20:37:17.032709  585602 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 20:37:17.032762  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:37:17.518572  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:17.533868  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:37:17.547199  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:37:17.547224  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:37:17.547272  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:37:17.556733  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:37:17.556801  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:37:17.566622  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:37:17.577044  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:37:17.577121  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:37:17.588726  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.599269  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:37:17.599346  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.609243  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:37:17.618947  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:37:17.619034  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:37:17.629228  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:37:17.878785  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:39:13.972213  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:39:13.972379  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 20:39:13.973936  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:39:13.974035  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:39:13.974150  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:39:13.974251  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:39:13.974341  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:39:13.974404  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:39:13.976164  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:39:13.976248  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:39:13.976339  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:39:13.976449  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:39:13.976538  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:39:13.976642  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:39:13.976736  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:39:13.976832  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:39:13.976924  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:39:13.977025  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:39:13.977131  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:39:13.977189  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:39:13.977272  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:39:13.977389  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:39:13.977474  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:39:13.977566  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:39:13.977650  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:39:13.977776  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:39:13.977901  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:39:13.977976  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:39:13.978137  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:39:13.979473  585602 out.go:235]   - Booting up control plane ...
	I1205 20:39:13.979581  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:39:13.979664  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:39:13.979732  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:39:13.979803  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:39:13.979952  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:39:13.980017  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:39:13.980107  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980396  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980511  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980744  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980843  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981116  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981227  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981439  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981528  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981718  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981731  585602 kubeadm.go:310] 
	I1205 20:39:13.981773  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:39:13.981831  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:39:13.981839  585602 kubeadm.go:310] 
	I1205 20:39:13.981888  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:39:13.981941  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:39:13.982052  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:39:13.982059  585602 kubeadm.go:310] 
	I1205 20:39:13.982144  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:39:13.982174  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:39:13.982208  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:39:13.982215  585602 kubeadm.go:310] 
	I1205 20:39:13.982302  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:39:13.982415  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:39:13.982431  585602 kubeadm.go:310] 
	I1205 20:39:13.982540  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:39:13.982618  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:39:13.982701  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:39:13.982766  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:39:13.982839  585602 kubeadm.go:310] 
	I1205 20:39:13.982855  585602 kubeadm.go:394] duration metric: took 7m58.414377536s to StartCluster
	I1205 20:39:13.982907  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:39:13.982975  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:39:14.031730  585602 cri.go:89] found id: ""
	I1205 20:39:14.031767  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.031779  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:39:14.031791  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:39:14.031865  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:39:14.068372  585602 cri.go:89] found id: ""
	I1205 20:39:14.068420  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.068433  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:39:14.068440  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:39:14.068512  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:39:14.106807  585602 cri.go:89] found id: ""
	I1205 20:39:14.106837  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.106847  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:39:14.106856  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:39:14.106930  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:39:14.144926  585602 cri.go:89] found id: ""
	I1205 20:39:14.144952  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.144960  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:39:14.144974  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:39:14.145052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:39:14.182712  585602 cri.go:89] found id: ""
	I1205 20:39:14.182742  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.182754  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:39:14.182762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:39:14.182826  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:39:14.220469  585602 cri.go:89] found id: ""
	I1205 20:39:14.220505  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.220519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:39:14.220527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:39:14.220593  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:39:14.269791  585602 cri.go:89] found id: ""
	I1205 20:39:14.269823  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.269835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:39:14.269842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:39:14.269911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:39:14.313406  585602 cri.go:89] found id: ""
	I1205 20:39:14.313439  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.313450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:39:14.313464  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:39:14.313483  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:39:14.330488  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:39:14.330526  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:39:14.417358  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:39:14.417403  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:39:14.417421  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:39:14.530226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:39:14.530270  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:39:14.585471  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:39:14.585512  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 20:39:14.636389  585602 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 20:39:14.636456  585602 out.go:270] * 
	W1205 20:39:14.636535  585602 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.636549  585602 out.go:270] * 
	W1205 20:39:14.637475  585602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:39:14.640654  585602 out.go:201] 
	W1205 20:39:14.641873  585602 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.641931  585602 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 20:39:14.641975  585602 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 20:39:14.643389  585602 out.go:201] 
	
	
	==> CRI-O <==
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.510926059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431156510907166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3d520fe-fdea-4099-95de-dc3c7a062f42 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.511545642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60666b05-8c33-4562-8237-d57b577cb6ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.511621441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60666b05-8c33-4562-8237-d57b577cb6ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.511658560Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=60666b05-8c33-4562-8237-d57b577cb6ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.548881602Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8fd6abf6-f952-4df5-9ad5-01da0150e5dd name=/runtime.v1.RuntimeService/Version
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.549017516Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fd6abf6-f952-4df5-9ad5-01da0150e5dd name=/runtime.v1.RuntimeService/Version
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.550417778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=860499cb-b8e6-405f-8c33-e1bd82e29a04 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.550803564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431156550783095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=860499cb-b8e6-405f-8c33-e1bd82e29a04 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.551534749Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d258b75c-d876-4b33-8ce3-4954697473ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.551629973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d258b75c-d876-4b33-8ce3-4954697473ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.551683810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d258b75c-d876-4b33-8ce3-4954697473ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.585941966Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d5e34fa-13ee-429a-befc-f7078f063bdf name=/runtime.v1.RuntimeService/Version
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.586074535Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d5e34fa-13ee-429a-befc-f7078f063bdf name=/runtime.v1.RuntimeService/Version
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.587382852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5cf45aac-f348-44a2-b6c4-59e79bc2fda5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.587816004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431156587788546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5cf45aac-f348-44a2-b6c4-59e79bc2fda5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.588437083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2476e2d-8fc9-4bb2-bbbc-408defd7c61c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.588535834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2476e2d-8fc9-4bb2-bbbc-408defd7c61c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.588613221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e2476e2d-8fc9-4bb2-bbbc-408defd7c61c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.621666587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1bedb97-52da-432e-bc05-8c42035d8fa4 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.621760717Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1bedb97-52da-432e-bc05-8c42035d8fa4 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.623101860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d87abbcf-2499-4ce5-89e3-ee1c708981bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.623497496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431156623469974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d87abbcf-2499-4ce5-89e3-ee1c708981bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.624115570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa94075e-aa49-4def-84d3-0c8500d32120 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.624200508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa94075e-aa49-4def-84d3-0c8500d32120 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:39:16 old-k8s-version-386085 crio[629]: time="2024-12-05 20:39:16.624236788Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fa94075e-aa49-4def-84d3-0c8500d32120 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 20:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053859] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048232] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.156020] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.849389] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.680157] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 5 20:31] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.058081] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059601] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.177616] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.149980] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.257256] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.927159] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.062736] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.953352] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +9.534888] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 5 20:35] systemd-fstab-generator[5061]: Ignoring "noauto" option for root device
	[Dec 5 20:37] systemd-fstab-generator[5344]: Ignoring "noauto" option for root device
	[  +0.073876] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:39:16 up 8 min,  0 users,  load average: 0.06, 0.14, 0.09
	Linux old-k8s-version-386085 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00087e8c0)
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]: goroutine 146 [runnable]:
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]: net._C2func_getaddrinfo(0xc000c8a280, 0x0, 0xc0006f5ad0, 0xc0006fa100, 0x0, 0x0, 0x0)
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]:         _cgo_gotypes.go:94 +0x55
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]: net.cgoLookupIPCNAME.func1(0xc000c8a280, 0x20, 0x20, 0xc0006f5ad0, 0xc0006fa100, 0x0, 0xc0006306a0, 0x57a492)
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000ca2570, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]: net.cgoIPLookup(0xc000d3df20, 0x48ab5d6, 0x3, 0xc000ca2570, 0x1f)
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]: created by net.cgoLookupIP
	Dec 05 20:39:13 old-k8s-version-386085 kubelet[5519]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Dec 05 20:39:13 old-k8s-version-386085 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 05 20:39:13 old-k8s-version-386085 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 20:39:14 old-k8s-version-386085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Dec 05 20:39:14 old-k8s-version-386085 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 05 20:39:14 old-k8s-version-386085 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 05 20:39:14 old-k8s-version-386085 kubelet[5576]: I1205 20:39:14.533618    5576 server.go:416] Version: v1.20.0
	Dec 05 20:39:14 old-k8s-version-386085 kubelet[5576]: I1205 20:39:14.534151    5576 server.go:837] Client rotation is on, will bootstrap in background
	Dec 05 20:39:14 old-k8s-version-386085 kubelet[5576]: I1205 20:39:14.537915    5576 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 05 20:39:14 old-k8s-version-386085 kubelet[5576]: W1205 20:39:14.539246    5576 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 05 20:39:14 old-k8s-version-386085 kubelet[5576]: I1205 20:39:14.541921    5576 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386085 -n old-k8s-version-386085
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 2 (238.653102ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-386085" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (726.48s)
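The kubelet log above shows the kubelet crash-looping (systemd restart counter at 20) while the apiserver reports "Stopped". A minimal triage sketch, assuming the old-k8s-version-386085 VM still exists; these are standard minikube/systemd commands, not part of the test run itself:

	out/minikube-linux-amd64 ssh -p old-k8s-version-386085 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-386085 -- sudo journalctl -u kubelet --no-pager -n 100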

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599: exit status 3 (3.200011852s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:27:53.772733  585802 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host
	E1205 20:27:53.772759  585802 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-942599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-942599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153615864s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-942599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599: exit status 3 (3.061786161s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:28:02.988708  585882 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host
	E1205 20:28:02.988735  585882 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-942599" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)
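The repeated "dial tcp 192.168.50.96:22: connect: no route to host" errors indicate the host lost SSH reachability to the VM after the stop, so both the status probe and the addon enable fail before ever reaching the cluster. A minimal sketch of re-running the same two checks by hand (commands taken from the test output above; the profile is assumed to still exist):

	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942599
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-942599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4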

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-789000 -n embed-certs-789000
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-05 20:44:58.782463364 +0000 UTC m=+6190.988083701
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
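A minimal sketch of checking the dashboard pod wait by hand, assuming the kubeconfig context follows the profile name (embed-certs-789000) and using the namespace and label selector from the failure above:

	kubectl --context embed-certs-789000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-789000 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard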
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-789000 -n embed-certs-789000
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-789000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-789000 logs -n 25: (2.314796636s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-790679 -- sudo                         | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-790679                                 | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-886958                           | kubernetes-upgrade-886958    | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-816185             | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-789000            | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-242147 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | disable-driver-mounts-242147                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:25 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386085        | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-942599  | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-816185                  | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-789000                 | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:37 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386085             | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-942599       | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:36 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:28:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:28:03.038037  585929 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:28:03.038168  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038178  585929 out.go:358] Setting ErrFile to fd 2...
	I1205 20:28:03.038185  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038375  585929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:28:03.038955  585929 out.go:352] Setting JSON to false
	I1205 20:28:03.039948  585929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":11429,"bootTime":1733419054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:28:03.040015  585929 start.go:139] virtualization: kvm guest
	I1205 20:28:03.042326  585929 out.go:177] * [default-k8s-diff-port-942599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:28:03.044291  585929 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:28:03.044320  585929 notify.go:220] Checking for updates...
	I1205 20:28:03.047072  585929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:28:03.048480  585929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:28:03.049796  585929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:28:03.051035  585929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:28:03.052263  585929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:28:03.054167  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:28:03.054665  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.054749  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.070361  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33501
	I1205 20:28:03.070891  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.071534  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.071563  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.071995  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.072285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.072587  585929 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:28:03.072920  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.072968  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.088186  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I1205 20:28:03.088660  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.089202  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.089224  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.089542  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.089782  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.122562  585929 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:28:03.123970  585929 start.go:297] selected driver: kvm2
	I1205 20:28:03.123992  585929 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.124128  585929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:28:03.125014  585929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.125111  585929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:28:03.140461  585929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:28:03.140904  585929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:28:03.140943  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:28:03.141015  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:28:03.141067  585929 start.go:340] cluster config:
	{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.141179  585929 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.144215  585929 out.go:177] * Starting "default-k8s-diff-port-942599" primary control-plane node in "default-k8s-diff-port-942599" cluster
	I1205 20:28:03.276565  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:03.145620  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:28:03.145661  585929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:28:03.145676  585929 cache.go:56] Caching tarball of preloaded images
	I1205 20:28:03.145844  585929 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:28:03.145864  585929 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:28:03.146005  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:28:03.146240  585929 start.go:360] acquireMachinesLock for default-k8s-diff-port-942599: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:28:06.348547  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:12.428620  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:15.500614  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:21.580587  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:24.652618  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:30.732598  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:33.804612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:39.884624  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:42.956577  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:49.036617  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:52.108607  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:58.188605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:01.260573  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:07.340591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:10.412578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:16.492574  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:19.564578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:25.644591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:28.716619  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:34.796609  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:37.868605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:43.948594  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:47.020553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:53.100499  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:56.172560  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:02.252612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:05.324648  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:11.404563  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:14.476553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:20.556568  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:23.561620  585113 start.go:364] duration metric: took 4m32.790399884s to acquireMachinesLock for "embed-certs-789000"
	I1205 20:30:23.561696  585113 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:23.561711  585113 fix.go:54] fixHost starting: 
	I1205 20:30:23.562327  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:23.562400  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:23.578260  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I1205 20:30:23.578843  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:23.579379  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:30:23.579405  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:23.579776  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:23.580051  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:23.580222  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:30:23.582161  585113 fix.go:112] recreateIfNeeded on embed-certs-789000: state=Stopped err=<nil>
	I1205 20:30:23.582190  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	W1205 20:30:23.582386  585113 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:23.584585  585113 out.go:177] * Restarting existing kvm2 VM for "embed-certs-789000" ...
	I1205 20:30:23.586583  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Start
	I1205 20:30:23.586835  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring networks are active...
	I1205 20:30:23.587628  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network default is active
	I1205 20:30:23.587937  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network mk-embed-certs-789000 is active
	I1205 20:30:23.588228  585113 main.go:141] libmachine: (embed-certs-789000) Getting domain xml...
	I1205 20:30:23.588898  585113 main.go:141] libmachine: (embed-certs-789000) Creating domain...
	I1205 20:30:24.829936  585113 main.go:141] libmachine: (embed-certs-789000) Waiting to get IP...
	I1205 20:30:24.830897  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:24.831398  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:24.831465  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:24.831364  586433 retry.go:31] will retry after 208.795355ms: waiting for machine to come up
	I1205 20:30:25.042078  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.042657  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.042689  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.042599  586433 retry.go:31] will retry after 385.313968ms: waiting for machine to come up
	I1205 20:30:25.429439  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.429877  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.429913  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.429811  586433 retry.go:31] will retry after 432.591358ms: waiting for machine to come up
	I1205 20:30:23.558453  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:30:23.558508  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.558905  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:30:23.558943  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.559166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:30:23.561471  585025 machine.go:96] duration metric: took 4m37.380964872s to provisionDockerMachine
	I1205 20:30:23.561518  585025 fix.go:56] duration metric: took 4m37.403172024s for fixHost
	I1205 20:30:23.561524  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 4m37.40319095s
	W1205 20:30:23.561546  585025 start.go:714] error starting host: provision: host is not running
	W1205 20:30:23.561677  585025 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:30:23.561688  585025 start.go:729] Will try again in 5 seconds ...
	I1205 20:30:25.864656  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.865217  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.865255  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.865138  586433 retry.go:31] will retry after 571.148349ms: waiting for machine to come up
	I1205 20:30:26.437644  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:26.438220  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:26.438250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:26.438165  586433 retry.go:31] will retry after 585.234455ms: waiting for machine to come up
	I1205 20:30:27.025107  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.025510  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.025538  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.025459  586433 retry.go:31] will retry after 648.291531ms: waiting for machine to come up
	I1205 20:30:27.675457  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.675898  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.675928  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.675838  586433 retry.go:31] will retry after 804.071148ms: waiting for machine to come up
	I1205 20:30:28.481966  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:28.482386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:28.482416  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:28.482329  586433 retry.go:31] will retry after 905.207403ms: waiting for machine to come up
	I1205 20:30:29.388933  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:29.389546  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:29.389571  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:29.389484  586433 retry.go:31] will retry after 1.48894232s: waiting for machine to come up
	I1205 20:30:28.562678  585025 start.go:360] acquireMachinesLock for no-preload-816185: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:30:30.880218  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:30.880742  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:30.880773  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:30.880685  586433 retry.go:31] will retry after 2.314200549s: waiting for machine to come up
	I1205 20:30:33.198477  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:33.198998  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:33.199029  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:33.198945  586433 retry.go:31] will retry after 1.922541264s: waiting for machine to come up
	I1205 20:30:35.123922  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:35.124579  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:35.124607  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:35.124524  586433 retry.go:31] will retry after 3.537087912s: waiting for machine to come up
	I1205 20:30:38.662839  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:38.663212  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:38.663250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:38.663160  586433 retry.go:31] will retry after 3.371938424s: waiting for machine to come up
	I1205 20:30:43.457332  585602 start.go:364] duration metric: took 3m31.488905557s to acquireMachinesLock for "old-k8s-version-386085"
	I1205 20:30:43.457418  585602 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:43.457427  585602 fix.go:54] fixHost starting: 
	I1205 20:30:43.457835  585602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:43.457891  585602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:43.474845  585602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I1205 20:30:43.475386  585602 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:43.475993  585602 main.go:141] libmachine: Using API Version  1
	I1205 20:30:43.476026  585602 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:43.476404  585602 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:43.476613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:30:43.476778  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetState
	I1205 20:30:43.478300  585602 fix.go:112] recreateIfNeeded on old-k8s-version-386085: state=Stopped err=<nil>
	I1205 20:30:43.478329  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	W1205 20:30:43.478502  585602 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:43.480644  585602 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386085" ...
	I1205 20:30:42.038738  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039204  585113 main.go:141] libmachine: (embed-certs-789000) Found IP for machine: 192.168.39.200
	I1205 20:30:42.039235  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has current primary IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039244  585113 main.go:141] libmachine: (embed-certs-789000) Reserving static IP address...
	I1205 20:30:42.039760  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.039806  585113 main.go:141] libmachine: (embed-certs-789000) DBG | skip adding static IP to network mk-embed-certs-789000 - found existing host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"}
	I1205 20:30:42.039819  585113 main.go:141] libmachine: (embed-certs-789000) Reserved static IP address: 192.168.39.200
	I1205 20:30:42.039835  585113 main.go:141] libmachine: (embed-certs-789000) Waiting for SSH to be available...
	I1205 20:30:42.039843  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Getting to WaitForSSH function...
	I1205 20:30:42.042013  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042352  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.042386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042542  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH client type: external
	I1205 20:30:42.042562  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa (-rw-------)
	I1205 20:30:42.042586  585113 main.go:141] libmachine: (embed-certs-789000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:30:42.042595  585113 main.go:141] libmachine: (embed-certs-789000) DBG | About to run SSH command:
	I1205 20:30:42.042603  585113 main.go:141] libmachine: (embed-certs-789000) DBG | exit 0
	I1205 20:30:42.168573  585113 main.go:141] libmachine: (embed-certs-789000) DBG | SSH cmd err, output: <nil>: 
	I1205 20:30:42.168960  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetConfigRaw
	I1205 20:30:42.169783  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.172396  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.172790  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.172818  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.173023  585113 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/config.json ...
	I1205 20:30:42.173214  585113 machine.go:93] provisionDockerMachine start ...
	I1205 20:30:42.173234  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:42.173465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.175399  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175754  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.175785  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175885  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.176063  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176208  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176412  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.176583  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.176816  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.176830  585113 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:30:42.280829  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:30:42.280861  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281135  585113 buildroot.go:166] provisioning hostname "embed-certs-789000"
	I1205 20:30:42.281168  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.284355  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284692  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.284723  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284817  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.285019  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285185  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285338  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.285511  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.285716  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.285730  585113 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-789000 && echo "embed-certs-789000" | sudo tee /etc/hostname
	I1205 20:30:42.409310  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-789000
	
	I1205 20:30:42.409370  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.412182  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412524  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.412566  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412779  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.412989  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413137  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413278  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.413468  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.413674  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.413690  585113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-789000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-789000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-789000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:30:42.529773  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:30:42.529806  585113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:30:42.529829  585113 buildroot.go:174] setting up certificates
	I1205 20:30:42.529841  585113 provision.go:84] configureAuth start
	I1205 20:30:42.529850  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.530201  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.533115  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533527  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.533558  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533753  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.535921  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536310  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.536339  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536518  585113 provision.go:143] copyHostCerts
	I1205 20:30:42.536610  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:30:42.536631  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:30:42.536698  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:30:42.536793  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:30:42.536802  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:30:42.536826  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:30:42.536880  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:30:42.536887  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:30:42.536908  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:30:42.536956  585113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-789000 san=[127.0.0.1 192.168.39.200 embed-certs-789000 localhost minikube]
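	The san=[...] list in the line above drives the server certificate that provisioning generates for this machine. As a rough illustration only, here is a minimal Go sketch (assumed code, not minikube's implementation) of how such a SAN list maps onto a crypto/x509 certificate template, with IP SANs and hostname SANs split into their respective fields:

	// A sketch of how the logged SAN list could populate an x509 template.
	package main

	import (
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-789000"}},
			// From the log: IP SANs go into IPAddresses, hostname SANs into DNSNames.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.200")},
			DNSNames:    []string{"embed-certs-789000", "localhost", "minikube"},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour), // validity period is an assumption
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		fmt.Printf("template with %d DNS SANs and %d IP SANs\n", len(tmpl.DNSNames), len(tmpl.IPAddresses))
	}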
	I1205 20:30:42.832543  585113 provision.go:177] copyRemoteCerts
	I1205 20:30:42.832610  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:30:42.832640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.835403  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835669  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.835701  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835848  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.836027  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.836161  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.836314  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:42.918661  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:30:42.943903  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 20:30:42.968233  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:30:42.993174  585113 provision.go:87] duration metric: took 463.317149ms to configureAuth
	I1205 20:30:42.993249  585113 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:30:42.993449  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:30:42.993554  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.996211  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996637  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.996696  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996841  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.997049  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997196  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997305  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.997458  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.997641  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.997656  585113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:30:43.220096  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:30:43.220127  585113 machine.go:96] duration metric: took 1.046899757s to provisionDockerMachine
	I1205 20:30:43.220141  585113 start.go:293] postStartSetup for "embed-certs-789000" (driver="kvm2")
	I1205 20:30:43.220152  585113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:30:43.220176  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.220544  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:30:43.220584  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.223481  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.223860  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.223889  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.224102  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.224316  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.224483  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.224667  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.307878  585113 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:30:43.312875  585113 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:30:43.312905  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:30:43.312981  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:30:43.313058  585113 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:30:43.313169  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:30:43.323221  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:43.347978  585113 start.go:296] duration metric: took 127.819083ms for postStartSetup
	I1205 20:30:43.348023  585113 fix.go:56] duration metric: took 19.786318897s for fixHost
	I1205 20:30:43.348046  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.350639  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351004  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.351026  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351247  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.351478  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351642  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351803  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.351950  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:43.352122  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:43.352133  585113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:30:43.457130  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430643.415370749
	
	I1205 20:30:43.457164  585113 fix.go:216] guest clock: 1733430643.415370749
	I1205 20:30:43.457176  585113 fix.go:229] Guest: 2024-12-05 20:30:43.415370749 +0000 UTC Remote: 2024-12-05 20:30:43.34802793 +0000 UTC m=+292.733798952 (delta=67.342819ms)
	I1205 20:30:43.457209  585113 fix.go:200] guest clock delta is within tolerance: 67.342819ms
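	The fix.go lines above run `date +%s.%N` on the guest, parse the result, and accept the 67.342819ms skew against the host clock. A minimal Go sketch of that comparison follows; the 1s tolerance is an assumption for illustration, since the actual threshold is not shown in this log:

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// withinTolerance reports whether the absolute guest/host clock skew is at most tol.
	func withinTolerance(guest, host time.Time, tol time.Duration) bool {
		return math.Abs(float64(guest.Sub(host))) <= float64(tol)
	}

	func main() {
		guest := time.Unix(1733430643, 415370749)      // parsed from `date +%s.%N` on the VM
		host := guest.Add(-67342819 * time.Nanosecond) // reproduces the 67.342819ms delta from the log
		fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
	}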
	I1205 20:30:43.457217  585113 start.go:83] releasing machines lock for "embed-certs-789000", held for 19.895543311s
	I1205 20:30:43.457251  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.457563  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:43.460628  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461002  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.461042  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461175  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461758  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461937  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.462067  585113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:30:43.462120  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.462147  585113 ssh_runner.go:195] Run: cat /version.json
	I1205 20:30:43.462169  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.464859  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465147  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465237  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465264  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465472  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465497  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465589  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465711  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465768  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.465863  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465907  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.466006  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.466129  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.568909  585113 ssh_runner.go:195] Run: systemctl --version
	I1205 20:30:43.575175  585113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:30:43.725214  585113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:30:43.732226  585113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:30:43.732369  585113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:30:43.750186  585113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:30:43.750223  585113 start.go:495] detecting cgroup driver to use...
	I1205 20:30:43.750296  585113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:30:43.767876  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:30:43.783386  585113 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:30:43.783465  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:30:43.799917  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:30:43.815607  585113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:30:43.935150  585113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:30:44.094292  585113 docker.go:233] disabling docker service ...
	I1205 20:30:44.094378  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:30:44.111307  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:30:44.127528  585113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:30:44.284496  585113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:30:44.422961  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:30:44.439104  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:30:44.461721  585113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:30:44.461787  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.476398  585113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:30:44.476463  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.489821  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.502250  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.514245  585113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:30:44.528227  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.540205  585113 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.559447  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
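	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in; the section headers here are assumed for illustration, only the key/value pairs themselves appear in the log:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]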
	I1205 20:30:44.571434  585113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:30:44.583635  585113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:30:44.583717  585113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:30:44.600954  585113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:30:44.613381  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:44.733592  585113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:30:44.843948  585113 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:30:44.844036  585113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:30:44.849215  585113 start.go:563] Will wait 60s for crictl version
	I1205 20:30:44.849275  585113 ssh_runner.go:195] Run: which crictl
	I1205 20:30:44.853481  585113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:30:44.900488  585113 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:30:44.900583  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.944771  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.977119  585113 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:30:44.978527  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:44.981609  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982001  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:44.982037  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982240  585113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:30:44.986979  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
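	The one-liner above makes the /etc/hosts update idempotent: any existing host.minikube.internal entry is filtered out before the fresh "IP name" mapping is appended. A minimal Go sketch of the same idea (illustrative only, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// updateHosts drops any line ending in "\t<name>" (like `grep -v $'\t<name>$'`)
	// and appends a fresh "ip\tname" entry, mirroring the logged bash one-liner.
	func updateHosts(content, ip, name string) string {
		lines := strings.Split(strings.TrimRight(content, "\n"), "\n")
		kept := make([]string, 0, len(lines)+1)
		for _, line := range lines {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(updateHosts(string(hosts), "192.168.39.1", "host.minikube.internal"))
	}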
	I1205 20:30:45.001779  585113 kubeadm.go:883] updating cluster {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:30:45.001935  585113 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:30:45.002021  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:45.041827  585113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:30:45.041918  585113 ssh_runner.go:195] Run: which lz4
	I1205 20:30:45.046336  585113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:30:45.050804  585113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:30:45.050852  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:30:43.482307  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .Start
	I1205 20:30:43.482501  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring networks are active...
	I1205 20:30:43.483222  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network default is active
	I1205 20:30:43.483574  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network mk-old-k8s-version-386085 is active
	I1205 20:30:43.484156  585602 main.go:141] libmachine: (old-k8s-version-386085) Getting domain xml...
	I1205 20:30:43.485045  585602 main.go:141] libmachine: (old-k8s-version-386085) Creating domain...
	I1205 20:30:44.770817  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting to get IP...
	I1205 20:30:44.772079  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:44.772538  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:44.772599  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:44.772517  586577 retry.go:31] will retry after 247.056435ms: waiting for machine to come up
	I1205 20:30:45.021096  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.021642  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.021560  586577 retry.go:31] will retry after 241.543543ms: waiting for machine to come up
	I1205 20:30:45.265136  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.265654  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.265683  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.265596  586577 retry.go:31] will retry after 324.624293ms: waiting for machine to come up
	I1205 20:30:45.592067  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.592603  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.592636  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.592558  586577 retry.go:31] will retry after 408.275958ms: waiting for machine to come up
	I1205 20:30:46.002321  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.002872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.002904  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.002808  586577 retry.go:31] will retry after 693.356488ms: waiting for machine to come up
	I1205 20:30:46.697505  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.697874  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.697900  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.697846  586577 retry.go:31] will retry after 906.807324ms: waiting for machine to come up
	I1205 20:30:46.612504  585113 crio.go:462] duration metric: took 1.56620974s to copy over tarball
	I1205 20:30:46.612585  585113 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:30:48.868826  585113 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256202653s)
	I1205 20:30:48.868863  585113 crio.go:469] duration metric: took 2.256329112s to extract the tarball
	I1205 20:30:48.868873  585113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:30:48.906872  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:48.955442  585113 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:30:48.955468  585113 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:30:48.955477  585113 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.31.2 crio true true} ...
	I1205 20:30:48.955603  585113 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-789000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:30:48.955668  585113 ssh_runner.go:195] Run: crio config
	I1205 20:30:49.007389  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:49.007419  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:49.007433  585113 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:30:49.007473  585113 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-789000 NodeName:embed-certs-789000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:30:49.007656  585113 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-789000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:30:49.007734  585113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:30:49.021862  585113 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:30:49.021949  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:30:49.032937  585113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1205 20:30:49.053311  585113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:30:49.073636  585113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1205 20:30:49.094437  585113 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I1205 20:30:49.098470  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:30:49.112013  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:49.246312  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:30:49.264250  585113 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000 for IP: 192.168.39.200
	I1205 20:30:49.264301  585113 certs.go:194] generating shared ca certs ...
	I1205 20:30:49.264329  585113 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:30:49.264565  585113 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:30:49.264627  585113 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:30:49.264641  585113 certs.go:256] generating profile certs ...
	I1205 20:30:49.264775  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/client.key
	I1205 20:30:49.264854  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key.5c723d79
	I1205 20:30:49.264894  585113 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key
	I1205 20:30:49.265026  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:30:49.265094  585113 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:30:49.265109  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:30:49.265144  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:30:49.265179  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:30:49.265215  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:30:49.265258  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:49.266137  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:30:49.297886  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:30:49.339461  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:30:49.385855  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:30:49.427676  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 20:30:49.466359  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:30:49.492535  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:30:49.518311  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:30:49.543545  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:30:49.567956  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:30:49.592361  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:30:49.616245  585113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:30:49.633947  585113 ssh_runner.go:195] Run: openssl version
	I1205 20:30:49.640353  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:30:49.652467  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657353  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657440  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.664045  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:30:49.679941  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:30:49.695153  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700397  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700458  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.706786  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:30:49.718994  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:30:49.731470  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736654  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736725  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.743034  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:30:49.755334  585113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:30:49.760378  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:30:49.766942  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:30:49.773911  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:30:49.780556  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:30:49.787004  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:30:49.793473  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
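	Each `openssl x509 ... -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now. A minimal Go equivalent of that check (a sketch, not minikube's code; the path is one of the certs listed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is what a failing `openssl x509 -checkend <seconds>` indicates.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}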
	I1205 20:30:49.800009  585113 kubeadm.go:392] StartCluster: {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:30:49.800118  585113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:30:49.800163  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.844520  585113 cri.go:89] found id: ""
	I1205 20:30:49.844620  585113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:30:49.857604  585113 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:30:49.857640  585113 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:30:49.857702  585113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:30:49.870235  585113 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:30:49.871318  585113 kubeconfig.go:125] found "embed-certs-789000" server: "https://192.168.39.200:8443"
	I1205 20:30:49.873416  585113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:30:49.884281  585113 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
	I1205 20:30:49.884331  585113 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:30:49.884348  585113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:30:49.884410  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.930238  585113 cri.go:89] found id: ""
	I1205 20:30:49.930351  585113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:30:49.947762  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:30:49.957878  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:30:49.957902  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:30:49.957960  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:30:49.967261  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:30:49.967342  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:30:49.977868  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:30:49.987715  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:30:49.987777  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:30:49.998157  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.008224  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:30:50.008334  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.018748  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:30:50.028204  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:30:50.028287  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:30:50.038459  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:30:50.049458  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:50.175199  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:47.606601  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:47.607065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:47.607098  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:47.607001  586577 retry.go:31] will retry after 1.007867893s: waiting for machine to come up
	I1205 20:30:48.617140  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:48.617641  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:48.617674  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:48.617608  586577 retry.go:31] will retry after 1.15317606s: waiting for machine to come up
	I1205 20:30:49.773126  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:49.773670  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:49.773699  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:49.773620  586577 retry.go:31] will retry after 1.342422822s: waiting for machine to come up
	I1205 20:30:51.117592  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:51.118034  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:51.118065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:51.117973  586577 retry.go:31] will retry after 1.575794078s: waiting for machine to come up
	I1205 20:30:51.203131  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.027881984s)
	I1205 20:30:51.203193  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.415679  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.500984  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.598883  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:30:51.598986  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.099206  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.599755  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.619189  585113 api_server.go:72] duration metric: took 1.020303049s to wait for apiserver process to appear ...
	I1205 20:30:52.619236  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:30:52.619268  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:52.619903  585113 api_server.go:269] stopped: https://192.168.39.200:8443/healthz: Get "https://192.168.39.200:8443/healthz": dial tcp 192.168.39.200:8443: connect: connection refused
	I1205 20:30:53.119501  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.342363  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:30:55.342398  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:30:55.342418  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.471683  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.471729  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:55.619946  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.634855  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.634906  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.119928  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.128358  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:56.128396  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.620047  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.625869  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:30:56.633658  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:30:56.633698  585113 api_server.go:131] duration metric: took 4.014451973s to wait for apiserver health ...
	I1205 20:30:56.633712  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:56.633721  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:56.635658  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:30:52.695389  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:52.695838  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:52.695868  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:52.695784  586577 retry.go:31] will retry after 2.377931285s: waiting for machine to come up
	I1205 20:30:55.076859  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:55.077428  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:55.077469  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:55.077377  586577 retry.go:31] will retry after 2.586837249s: waiting for machine to come up
	I1205 20:30:56.637276  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:30:56.649131  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:30:56.670981  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:30:56.682424  585113 system_pods.go:59] 8 kube-system pods found
	I1205 20:30:56.682497  585113 system_pods.go:61] "coredns-7c65d6cfc9-hrrjc" [43d8b550-f29d-4a84-a2fc-b456abc486c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:30:56.682508  585113 system_pods.go:61] "etcd-embed-certs-789000" [99f232e4-1bc8-4f98-8bcf-8aa61d66158b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:30:56.682519  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [d1d11749-0ddc-4172-aaa9-bca00c64c912] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:30:56.682528  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [b291c993-cd10-4d0f-8c3e-a6db726cf83a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:30:56.682536  585113 system_pods.go:61] "kube-proxy-h79dj" [80abe907-24e7-4001-90a6-f4d10fd9fc6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:30:56.682544  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [490d7afa-24fd-43c8-8088-539bb7e1eb9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:30:56.682556  585113 system_pods.go:61] "metrics-server-6867b74b74-tlsjl" [cd1d73a4-27d1-4e68-b7d8-6da497fc4e53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:30:56.682570  585113 system_pods.go:61] "storage-provisioner" [3246e383-4f15-4222-a50c-c5b243fda12a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:30:56.682579  585113 system_pods.go:74] duration metric: took 11.566899ms to wait for pod list to return data ...
	I1205 20:30:56.682598  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:30:56.687073  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:30:56.687172  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:30:56.687222  585113 node_conditions.go:105] duration metric: took 4.613225ms to run NodePressure ...
	I1205 20:30:56.687273  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:56.981686  585113 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985944  585113 kubeadm.go:739] kubelet initialised
	I1205 20:30:56.985968  585113 kubeadm.go:740] duration metric: took 4.256434ms waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985976  585113 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:30:56.991854  585113 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:30:58.997499  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
	I1205 20:30:57.667200  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:57.667644  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:57.667681  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:57.667592  586577 retry.go:31] will retry after 2.856276116s: waiting for machine to come up
	I1205 20:31:00.525334  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:00.525796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:31:00.525830  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:31:00.525740  586577 retry.go:31] will retry after 5.119761936s: waiting for machine to come up
	I1205 20:31:00.999102  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:01.500344  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:01.500371  585113 pod_ready.go:82] duration metric: took 4.508490852s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:01.500382  585113 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:03.506621  585113 pod_ready.go:103] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:05.007677  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:05.007703  585113 pod_ready.go:82] duration metric: took 3.507315826s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:05.007713  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:05.646790  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647230  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has current primary IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647264  585602 main.go:141] libmachine: (old-k8s-version-386085) Found IP for machine: 192.168.72.144
	I1205 20:31:05.647278  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserving static IP address...
	I1205 20:31:05.647796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.647834  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | skip adding static IP to network mk-old-k8s-version-386085 - found existing host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"}
	I1205 20:31:05.647856  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserved static IP address: 192.168.72.144
	I1205 20:31:05.647872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Getting to WaitForSSH function...
	I1205 20:31:05.647889  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting for SSH to be available...
	I1205 20:31:05.650296  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650610  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.650643  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH client type: external
	I1205 20:31:05.650779  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa (-rw-------)
	I1205 20:31:05.650816  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:05.650837  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | About to run SSH command:
	I1205 20:31:05.650851  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | exit 0
	I1205 20:31:05.776876  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:05.777311  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:31:05.777948  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:05.780609  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781053  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.781091  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781319  585602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json ...
	I1205 20:31:05.781585  585602 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:05.781607  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:05.781942  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.784729  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785155  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.785191  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785326  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.785491  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785659  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785886  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.786078  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.786309  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.786323  585602 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:05.893034  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:05.893079  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893388  585602 buildroot.go:166] provisioning hostname "old-k8s-version-386085"
	I1205 20:31:05.893426  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893623  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.896484  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.896883  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.896910  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.897031  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.897252  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897441  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897615  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.897796  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.897965  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.897977  585602 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386085 && echo "old-k8s-version-386085" | sudo tee /etc/hostname
	I1205 20:31:06.017910  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386085
	
	I1205 20:31:06.017939  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.020956  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021298  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.021332  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021494  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021863  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021995  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.022137  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.022325  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.022342  585602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386085' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386085/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386085' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:06.138200  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:06.138234  585602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:06.138261  585602 buildroot.go:174] setting up certificates
	I1205 20:31:06.138274  585602 provision.go:84] configureAuth start
	I1205 20:31:06.138287  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:06.138588  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.141488  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.141909  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.141965  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.142096  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.144144  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144720  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.144742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144951  585602 provision.go:143] copyHostCerts
	I1205 20:31:06.145020  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:06.145031  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:06.145085  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:06.145206  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:06.145219  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:06.145248  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:06.145335  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:06.145346  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:06.145376  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:06.145452  585602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386085 san=[127.0.0.1 192.168.72.144 localhost minikube old-k8s-version-386085]
	I1205 20:31:06.276466  585602 provision.go:177] copyRemoteCerts
	I1205 20:31:06.276530  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:06.276559  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.279218  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279550  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.279578  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279766  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.279990  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.280152  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.280317  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.362479  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:06.387631  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:31:06.413110  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:06.437931  585602 provision.go:87] duration metric: took 299.641033ms to configureAuth
	I1205 20:31:06.437962  585602 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:06.438176  585602 config.go:182] Loaded profile config "old-k8s-version-386085": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:31:06.438272  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.441059  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441413  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.441444  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441655  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.441846  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.441992  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.442174  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.442379  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.442552  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.442568  585602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:06.655666  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:06.655699  585602 machine.go:96] duration metric: took 874.099032ms to provisionDockerMachine
	I1205 20:31:06.655713  585602 start.go:293] postStartSetup for "old-k8s-version-386085" (driver="kvm2")
	I1205 20:31:06.655723  585602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:06.655752  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.656082  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:06.656115  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.658835  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659178  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.659229  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659378  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.659636  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.659808  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.659971  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.744484  585602 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:06.749025  585602 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:06.749060  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:06.749134  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:06.749273  585602 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:06.749411  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:06.760720  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:06.785449  585602 start.go:296] duration metric: took 129.720092ms for postStartSetup
	I1205 20:31:06.785500  585602 fix.go:56] duration metric: took 23.328073686s for fixHost
	I1205 20:31:06.785526  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.788417  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.788797  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.788828  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.789049  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.789296  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789483  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789688  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.789870  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.790046  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.790065  585602 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:06.897781  585929 start.go:364] duration metric: took 3m3.751494327s to acquireMachinesLock for "default-k8s-diff-port-942599"
	I1205 20:31:06.897847  585929 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:06.897858  585929 fix.go:54] fixHost starting: 
	I1205 20:31:06.898355  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:06.898419  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:06.916556  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I1205 20:31:06.917111  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:06.917648  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:31:06.917674  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:06.918014  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:06.918256  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:06.918402  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:31:06.920077  585929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-942599: state=Stopped err=<nil>
	I1205 20:31:06.920105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	W1205 20:31:06.920257  585929 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:06.922145  585929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-942599" ...
	I1205 20:31:06.923548  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Start
	I1205 20:31:06.923770  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring networks are active...
	I1205 20:31:06.924750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network default is active
	I1205 20:31:06.925240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network mk-default-k8s-diff-port-942599 is active
	I1205 20:31:06.925721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Getting domain xml...
	I1205 20:31:06.926719  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Creating domain...
	I1205 20:31:06.897579  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430666.872047181
	
	I1205 20:31:06.897606  585602 fix.go:216] guest clock: 1733430666.872047181
	I1205 20:31:06.897615  585602 fix.go:229] Guest: 2024-12-05 20:31:06.872047181 +0000 UTC Remote: 2024-12-05 20:31:06.785506394 +0000 UTC m=+234.970971247 (delta=86.540787ms)
	I1205 20:31:06.897679  585602 fix.go:200] guest clock delta is within tolerance: 86.540787ms
	I1205 20:31:06.897691  585602 start.go:83] releasing machines lock for "old-k8s-version-386085", held for 23.440303187s
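
The lines above record minikube's guest-clock check: it runs "date +%s.%N" over SSH, parses the guest time, and accepts the machine when the delta to the local clock is small (86.5ms here) before releasing the machines lock. Below is a minimal Go sketch of that comparison; the sshRun helper and the 1-second tolerance are illustrative assumptions, not minikube's actual fix.go code.

	// clockdelta.go - sketch of the guest-clock check seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	// sshRun stands in for minikube's SSH runner; here it just shells out locally.
	func sshRun(cmd string) (string, error) {
		out, err := exec.Command("/bin/sh", "-c", cmd).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		// Read the guest clock the same way the log does: date +%s.%N
		out, err := sshRun("date +%s.%N")
		if err != nil {
			panic(err)
		}
		secs, err := strconv.ParseFloat(out, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// A 1s tolerance is an assumption for illustration; the log accepts an 86ms delta.
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < time.Second)
	}
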
	I1205 20:31:06.897727  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.898085  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.901127  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901530  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.901567  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901719  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902413  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902626  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902776  585602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:06.902827  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.902878  585602 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:06.902903  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.905664  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.905912  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906050  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906086  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906256  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906341  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906367  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906411  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906517  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906684  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906837  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906849  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.907112  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.986078  585602 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:07.009500  585602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:07.159146  585602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:07.166263  585602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:07.166358  585602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:07.186021  585602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:07.186063  585602 start.go:495] detecting cgroup driver to use...
	I1205 20:31:07.186140  585602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:07.205074  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:07.221207  585602 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:07.221268  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:07.236669  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:07.252848  585602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:07.369389  585602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:07.504993  585602 docker.go:233] disabling docker service ...
	I1205 20:31:07.505101  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:07.523294  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:07.538595  585602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:07.687830  585602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:07.816176  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:07.833624  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:07.853409  585602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:31:07.853478  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.865346  585602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:07.865426  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.877962  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.889255  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.901632  585602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:07.916169  585602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:07.927092  585602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:07.927169  585602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:07.942288  585602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
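
Here the sysctl probe for net.bridge.bridge-nf-call-iptables exits with status 255 because the br_netfilter module is not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding. A rough Go sketch of that check-then-fallback sequence, reusing the exact commands from the log (the run helper is illustrative):

	// brnetfilter.go - sketch of the netfilter check/fallback seen in the log above.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(cmd string) error {
		return exec.Command("/bin/sh", "-c", cmd).Run()
	}

	func main() {
		// If the sysctl key is missing, load br_netfilter instead of failing hard.
		if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
			log.Printf("couldn't verify netfilter (might be okay): %v", err)
			if err := run("sudo modprobe br_netfilter"); err != nil {
				log.Fatalf("modprobe br_netfilter: %v", err)
			}
		}
		// Then make sure IPv4 forwarding is on, as in the log.
		if err := run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`); err != nil {
			log.Fatalf("enable ip_forward: %v", err)
		}
	}
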
	I1205 20:31:07.953314  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:08.092156  585602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:08.205715  585602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:08.205799  585602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:08.214280  585602 start.go:563] Will wait 60s for crictl version
	I1205 20:31:08.214351  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:08.220837  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:08.265983  585602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:08.266065  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.295839  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.327805  585602 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 20:31:07.014634  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:08.018024  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.018062  585113 pod_ready.go:82] duration metric: took 3.010340127s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.018080  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024700  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.024731  585113 pod_ready.go:82] duration metric: took 6.639434ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024744  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030379  585113 pod_ready.go:93] pod "kube-proxy-h79dj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.030399  585113 pod_ready.go:82] duration metric: took 5.648086ms for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030408  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036191  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.036211  585113 pod_ready.go:82] duration metric: took 5.797344ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036223  585113 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:10.051737  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:08.329278  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:08.332352  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332700  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:08.332747  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332930  585602 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:08.337611  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:08.350860  585602 kubeadm.go:883] updating cluster {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:08.351016  585602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:31:08.351090  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:08.403640  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:08.403716  585602 ssh_runner.go:195] Run: which lz4
	I1205 20:31:08.408211  585602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:08.413136  585602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:08.413168  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 20:31:10.209351  585602 crio.go:462] duration metric: took 1.801169802s to copy over tarball
	I1205 20:31:10.209438  585602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:31:08.255781  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting to get IP...
	I1205 20:31:08.256721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257262  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.257164  586715 retry.go:31] will retry after 301.077952ms: waiting for machine to come up
	I1205 20:31:08.559682  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560187  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560216  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.560130  586715 retry.go:31] will retry after 364.457823ms: waiting for machine to come up
	I1205 20:31:08.926774  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927371  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927401  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.927274  586715 retry.go:31] will retry after 461.958198ms: waiting for machine to come up
	I1205 20:31:09.390861  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391502  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.391432  586715 retry.go:31] will retry after 587.049038ms: waiting for machine to come up
	I1205 20:31:09.980451  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.980999  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.981026  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.980932  586715 retry.go:31] will retry after 499.551949ms: waiting for machine to come up
	I1205 20:31:10.482653  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483188  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483219  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:10.483135  586715 retry.go:31] will retry after 749.476034ms: waiting for machine to come up
	I1205 20:31:11.233788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234286  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234315  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:11.234227  586715 retry.go:31] will retry after 768.81557ms: waiting for machine to come up
	I1205 20:31:12.004904  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005427  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005460  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:12.005382  586715 retry.go:31] will retry after 1.360132177s: waiting for machine to come up
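
The retry.go lines above poll libvirt for a DHCP lease on the VM's MAC address, sleeping a little longer after each miss until the machine reports an IP or the start timeout expires. A sketch of that wait loop follows; lookupIP, the 6-minute deadline, and the exact backoff schedule are illustrative assumptions, not the real retry.go logic.

	// waitip.go - sketch of the "waiting for machine to come up" loop above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for reading the libvirt DHCP lease table
	// for the domain's MAC address (52:54:00:f6:dd:0f in the log above).
	func lookupIP(mac string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // matches StartHostTimeout in the config above
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP("52:54:00:f6:dd:0f"); err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			// Jittered, growing delay; the per-attempt delays in the log vary similarly.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay += delay / 2
		}
		fmt.Println("timed out waiting for machine IP")
	}
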
	I1205 20:31:12.549406  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:15.043540  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:13.303553  585602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094044744s)
	I1205 20:31:13.303598  585602 crio.go:469] duration metric: took 3.094215888s to extract the tarball
	I1205 20:31:13.303610  585602 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:13.350989  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:13.388660  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:13.388702  585602 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:13.388814  585602 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.388822  585602 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.388832  585602 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.388853  585602 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.388881  585602 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.388904  585602 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:31:13.388823  585602 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.388859  585602 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390414  585602 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390941  585602 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.391016  585602 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.390927  585602 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.391373  585602 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.391378  585602 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.565006  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.577450  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 20:31:13.584653  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.597086  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.619848  585602 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 20:31:13.619899  585602 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.619955  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.623277  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.628407  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.697151  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.703111  585602 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 20:31:13.703167  585602 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 20:31:13.703219  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736004  585602 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 20:31:13.736059  585602 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.736058  585602 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 20:31:13.736078  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.736094  585602 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.736104  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736135  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736187  585602 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 20:31:13.736207  585602 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.736235  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.783651  585602 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 20:31:13.783706  585602 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.783758  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.787597  585602 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 20:31:13.787649  585602 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.787656  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.787692  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.828445  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.828491  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.828544  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.828573  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.828616  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.828635  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.890937  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.992600  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.992661  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.992725  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.992780  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.095364  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:14.095462  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:14.163224  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 20:31:14.163320  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:14.163339  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:14.163420  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:14.163510  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.243805  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 20:31:14.243860  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 20:31:14.243881  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 20:31:14.287718  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 20:31:14.290994  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 20:31:14.291049  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 20:31:14.579648  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:14.728232  585602 cache_images.go:92] duration metric: took 1.339506459s to LoadCachedImages
	W1205 20:31:14.728389  585602 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
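
This block shows the LoadCachedImages flow: each required image is inspected in the container runtime, flagged as "needs transfer" when the expected digest is absent, any stale tag is removed with crictl, and the image is then loaded from the on-disk cache; here the load fails because the cached kube-proxy tarball is missing. A sketch of that per-image decision, assuming hypothetical helper names and simplified error handling (the podman/crictl invocations are the ones in the log):

	// loadcached.go - sketch of the per-image cache check seen in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func run(args ...string) error { return exec.Command(args[0], args[1:]...).Run() }

	func ensureImage(image, cacheDir string) error {
		// "needs transfer" check: does the runtime already have the image?
		if err := run("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image); err == nil {
			return nil // already present, nothing to do
		}
		fmt.Printf("%q needs transfer\n", image)
		_ = run("sudo", "/usr/bin/crictl", "rmi", image) // drop any stale tag; ignore errors
		// Cache path layout mirrors the log, e.g. .../cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
		cached := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
		if _, err := os.Stat(cached); err != nil {
			return fmt.Errorf("load from cache: %w", err) // the failure reported in the log
		}
		fmt.Println("loading image from:", cached)
		return nil
	}

	func main() {
		err := ensureImage("registry.k8s.io/kube-proxy:v1.20.0",
			"/home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64")
		if err != nil {
			fmt.Println("X Unable to load cached images:", err)
		}
	}
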
	I1205 20:31:14.728417  585602 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.20.0 crio true true} ...
	I1205 20:31:14.728570  585602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386085 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:31:14.728672  585602 ssh_runner.go:195] Run: crio config
	I1205 20:31:14.778932  585602 cni.go:84] Creating CNI manager for ""
	I1205 20:31:14.778957  585602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:14.778967  585602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:14.778987  585602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386085 NodeName:old-k8s-version-386085 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:31:14.779131  585602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386085"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:14.779196  585602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 20:31:14.792400  585602 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:14.792494  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:14.802873  585602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:31:14.821562  585602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:14.839442  585602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 20:31:14.861314  585602 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:14.865457  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:14.878278  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:15.002193  585602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:15.030699  585602 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085 for IP: 192.168.72.144
	I1205 20:31:15.030734  585602 certs.go:194] generating shared ca certs ...
	I1205 20:31:15.030758  585602 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.030975  585602 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:15.031027  585602 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:15.031048  585602 certs.go:256] generating profile certs ...
	I1205 20:31:15.031206  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key
	I1205 20:31:15.031276  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18
	I1205 20:31:15.031324  585602 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key
	I1205 20:31:15.031489  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:15.031535  585602 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:15.031550  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:15.031581  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:15.031612  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:15.031644  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:15.031698  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:15.032410  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:15.063090  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:15.094212  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:15.124685  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:15.159953  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 20:31:15.204250  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:31:15.237483  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:15.276431  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:15.303774  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:15.328872  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:15.353852  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:15.380916  585602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:15.401082  585602 ssh_runner.go:195] Run: openssl version
	I1205 20:31:15.407442  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:15.420377  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425721  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425800  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.432475  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:15.446140  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:15.459709  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465165  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465241  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.471609  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:15.484139  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:15.496636  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501575  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501634  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.507814  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
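
The openssl and ln commands above install each CA certificate under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). Below is a short sketch of those two steps for a single certificate; the paths come from the log, the surrounding Go code is illustrative.

	// certlink.go - sketch of the CA-install steps in the log above: compute the
	// certificate's subject hash with openssl, then symlink /etc/ssl/certs/<hash>.0 to it.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"

		// openssl x509 -hash -noout -in <pem>  -> e.g. "b5213941"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		// ln -fs <pem> /etc/ssl/certs/<hash>.0 (run with sudo in the log above)
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
			panic(err)
		}
		fmt.Println("linked", pem, "->", link)
	}
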
	I1205 20:31:15.521234  585602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:15.526452  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:15.532999  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:15.540680  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:15.547455  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:15.553996  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:15.560574  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:31:15.568489  585602 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:15.568602  585602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:15.568682  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.610693  585602 cri.go:89] found id: ""
	I1205 20:31:15.610808  585602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:15.622685  585602 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:15.622709  585602 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:15.622764  585602 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:15.633754  585602 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:15.634922  585602 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386085" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:31:15.635682  585602 kubeconfig.go:62] /home/jenkins/minikube-integration/20052-530897/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386085" cluster setting kubeconfig missing "old-k8s-version-386085" context setting]
	I1205 20:31:15.636878  585602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.719767  585602 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:15.731576  585602 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I1205 20:31:15.731622  585602 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:15.731639  585602 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:15.731705  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.777769  585602 cri.go:89] found id: ""
	I1205 20:31:15.777875  585602 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:15.797121  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:15.807961  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:15.807991  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:15.808042  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:31:15.818177  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:15.818270  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:15.829092  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:31:15.839471  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:15.839564  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:15.850035  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.859907  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:15.859984  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.870882  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:31:15.881475  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:15.881549  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:31:15.892078  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:15.904312  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.042308  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.787487  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:13.367666  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368154  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368185  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:13.368096  586715 retry.go:31] will retry after 1.319101375s: waiting for machine to come up
	I1205 20:31:14.689562  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690039  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:14.689996  586715 retry.go:31] will retry after 2.267379471s: waiting for machine to come up
	I1205 20:31:16.959412  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959882  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959915  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:16.959804  586715 retry.go:31] will retry after 2.871837018s: waiting for machine to come up
	I1205 20:31:17.044878  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:19.543265  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:17.036864  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.128855  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
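	(Note: the v1.20.0 bring-up above does not rerun a full kubeadm init; after copying the regenerated /var/tmp/minikube/kubeadm.yaml into place it replays the individual init phases. Collected from the interleaved Run: lines, the equivalent manual sequence is roughly the following, a sketch only, assuming the v1.20.0 binaries are already staged under /var/lib/minikube/binaries as in this log:
	# regenerate certs and kubeconfigs, then bring the static control plane and etcd back up
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
	# minikube then polls for the kube-apiserver process, as in the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines below)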
	I1205 20:31:17.219276  585602 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:17.219380  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:17.720206  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.219623  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.719555  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.219776  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.719967  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.219686  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.719806  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.219875  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.719915  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.834750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835299  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835326  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:19.835203  586715 retry.go:31] will retry after 2.740879193s: waiting for machine to come up
	I1205 20:31:22.577264  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577746  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577775  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:22.577709  586715 retry.go:31] will retry after 3.807887487s: waiting for machine to come up
	I1205 20:31:22.043635  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:24.543255  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:22.219930  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:22.719848  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.719903  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.220505  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.719726  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.220161  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.720115  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.220399  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.719567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:27.669618  585025 start.go:364] duration metric: took 59.106849765s to acquireMachinesLock for "no-preload-816185"
	I1205 20:31:27.669680  585025 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:27.669689  585025 fix.go:54] fixHost starting: 
	I1205 20:31:27.670111  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:27.670153  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:27.689600  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1205 20:31:27.690043  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:27.690508  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:31:27.690530  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:27.690931  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:27.691146  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:27.691279  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:31:27.692881  585025 fix.go:112] recreateIfNeeded on no-preload-816185: state=Stopped err=<nil>
	I1205 20:31:27.692905  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	W1205 20:31:27.693059  585025 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:27.694833  585025 out.go:177] * Restarting existing kvm2 VM for "no-preload-816185" ...
	I1205 20:31:26.389296  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389828  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Found IP for machine: 192.168.50.96
	I1205 20:31:26.389866  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has current primary IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserving static IP address...
	I1205 20:31:26.390321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserved static IP address: 192.168.50.96
	I1205 20:31:26.390354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for SSH to be available...
	I1205 20:31:26.390380  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.390404  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | skip adding static IP to network mk-default-k8s-diff-port-942599 - found existing host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"}
	I1205 20:31:26.390420  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Getting to WaitForSSH function...
	I1205 20:31:26.392509  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392875  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.392912  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH client type: external
	I1205 20:31:26.392988  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa (-rw-------)
	I1205 20:31:26.393057  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:26.393086  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | About to run SSH command:
	I1205 20:31:26.393105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | exit 0
	I1205 20:31:26.520867  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:26.521212  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetConfigRaw
	I1205 20:31:26.521857  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.524512  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.524853  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.524883  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.525141  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:31:26.525404  585929 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:26.525425  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:26.525639  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.527806  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.528121  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528257  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.528474  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528635  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528771  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.528902  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.529132  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.529147  585929 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:26.645385  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:26.645429  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645719  585929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-942599"
	I1205 20:31:26.645751  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645962  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.648906  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649316  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.649346  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649473  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.649686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649880  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649998  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.650161  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.650338  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.650354  585929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-942599 && echo "default-k8s-diff-port-942599" | sudo tee /etc/hostname
	I1205 20:31:26.780217  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942599
	
	I1205 20:31:26.780253  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.783240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783628  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.783660  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783804  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.783997  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784162  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.784530  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.784747  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.784766  585929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-942599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-942599/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-942599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:26.909975  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:26.910006  585929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:26.910087  585929 buildroot.go:174] setting up certificates
	I1205 20:31:26.910101  585929 provision.go:84] configureAuth start
	I1205 20:31:26.910114  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.910440  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.913667  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.914094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.917031  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917430  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.917462  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917608  585929 provision.go:143] copyHostCerts
	I1205 20:31:26.917681  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:26.917706  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:26.917772  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:26.917889  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:26.917900  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:26.917935  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:26.918013  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:26.918023  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:26.918065  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:26.918163  585929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-942599 san=[127.0.0.1 192.168.50.96 default-k8s-diff-port-942599 localhost minikube]
	I1205 20:31:27.003691  585929 provision.go:177] copyRemoteCerts
	I1205 20:31:27.003783  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:27.003821  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.006311  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006632  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.006665  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006820  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.007011  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.007153  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.007274  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.094973  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:27.121684  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 20:31:27.146420  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:27.171049  585929 provision.go:87] duration metric: took 260.930345ms to configureAuth
	I1205 20:31:27.171083  585929 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:27.171268  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:27.171385  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.174287  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174677  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.174717  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174946  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.175168  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175338  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.175703  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.175927  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.175959  585929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:27.416697  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:27.416724  585929 machine.go:96] duration metric: took 891.305367ms to provisionDockerMachine
	I1205 20:31:27.416737  585929 start.go:293] postStartSetup for "default-k8s-diff-port-942599" (driver="kvm2")
	I1205 20:31:27.416748  585929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:27.416786  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.417143  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:27.417183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.419694  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420041  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.420072  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420259  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.420488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.420681  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.420813  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.507592  585929 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:27.512178  585929 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:27.512209  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:27.512297  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:27.512416  585929 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:27.512544  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:27.522860  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:27.550167  585929 start.go:296] duration metric: took 133.414654ms for postStartSetup
	I1205 20:31:27.550211  585929 fix.go:56] duration metric: took 20.652352836s for fixHost
	I1205 20:31:27.550240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.553056  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.553490  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553631  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.553822  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554007  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.554372  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.554584  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.554603  585929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:27.669428  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430687.619179277
	
	I1205 20:31:27.669455  585929 fix.go:216] guest clock: 1733430687.619179277
	I1205 20:31:27.669467  585929 fix.go:229] Guest: 2024-12-05 20:31:27.619179277 +0000 UTC Remote: 2024-12-05 20:31:27.550217419 +0000 UTC m=+204.551998169 (delta=68.961858ms)
	I1205 20:31:27.669506  585929 fix.go:200] guest clock delta is within tolerance: 68.961858ms
	I1205 20:31:27.669514  585929 start.go:83] releasing machines lock for "default-k8s-diff-port-942599", held for 20.771694403s
	I1205 20:31:27.669559  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.669877  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:27.672547  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.672978  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.673009  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.673224  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673992  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.674125  585929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:27.674176  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.674201  585929 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:27.674231  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.677006  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677388  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677418  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677437  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677565  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.677745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.677919  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.677925  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677948  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.678115  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.678107  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.678258  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.678382  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.678527  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.790786  585929 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:27.797092  585929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:27.946053  585929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:27.953979  585929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:27.954073  585929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:27.975059  585929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:27.975090  585929 start.go:495] detecting cgroup driver to use...
	I1205 20:31:27.975160  585929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:27.991738  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:28.006412  585929 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:28.006529  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:28.021329  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:28.037390  585929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:28.155470  585929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:28.326332  585929 docker.go:233] disabling docker service ...
	I1205 20:31:28.326415  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:28.343299  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:28.358147  585929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:28.493547  585929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:28.631184  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:28.647267  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:28.670176  585929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:28.670269  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.686230  585929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:28.686312  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.702991  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.715390  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.731909  585929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:28.745042  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.757462  585929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.779049  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.790960  585929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:28.806652  585929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:28.806724  585929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:28.821835  585929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:31:28.832688  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:28.967877  585929 ssh_runner.go:195] Run: sudo systemctl restart crio
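	(Note: before restarting cri-o, the provisioner above applies a small set of edits to /etc/crio/crio.conf.d/02-crio.conf and prepares the kernel networking prerequisites. A condensed sketch of the same steps, with paths and the pause image taken verbatim from the log above; other minikube/CRI-O versions may differ:
	sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# the log also injects "net.ipv4.ip_unprivileged_port_start=0" into default_sysctls via the same sed pattern
	sudo modprobe br_netfilter            # bridge-nf-call-iptables was missing, hence the status 255 above
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio)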
	I1205 20:31:29.084571  585929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:29.084666  585929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:29.089892  585929 start.go:563] Will wait 60s for crictl version
	I1205 20:31:29.089958  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:31:29.094021  585929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:29.132755  585929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:29.132843  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.161779  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.194415  585929 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:31:27.042893  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:29.545284  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:27.696342  585025 main.go:141] libmachine: (no-preload-816185) Calling .Start
	I1205 20:31:27.696546  585025 main.go:141] libmachine: (no-preload-816185) Ensuring networks are active...
	I1205 20:31:27.697272  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network default is active
	I1205 20:31:27.697720  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network mk-no-preload-816185 is active
	I1205 20:31:27.698153  585025 main.go:141] libmachine: (no-preload-816185) Getting domain xml...
	I1205 20:31:27.698993  585025 main.go:141] libmachine: (no-preload-816185) Creating domain...
	I1205 20:31:29.005551  585025 main.go:141] libmachine: (no-preload-816185) Waiting to get IP...
	I1205 20:31:29.006633  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.007124  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.007217  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.007100  586921 retry.go:31] will retry after 264.716976ms: waiting for machine to come up
	I1205 20:31:29.273821  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.274364  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.274393  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.274318  586921 retry.go:31] will retry after 307.156436ms: waiting for machine to come up
	I1205 20:31:29.582968  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.583583  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.583621  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.583531  586921 retry.go:31] will retry after 335.63624ms: waiting for machine to come up
	I1205 20:31:29.921262  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.921823  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.921855  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.921771  586921 retry.go:31] will retry after 577.408278ms: waiting for machine to come up
	I1205 20:31:30.500556  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:30.501058  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:30.501095  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:30.500999  586921 retry.go:31] will retry after 757.019094ms: waiting for machine to come up
	I1205 20:31:27.220124  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:27.719460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.719599  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.219672  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.720450  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.220436  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.719573  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.220357  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.720052  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.195845  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:29.198779  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199138  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:29.199171  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199365  585929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:29.204553  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:29.217722  585929 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:29.217873  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:29.217943  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:29.259006  585929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:29.259105  585929 ssh_runner.go:195] Run: which lz4
	I1205 20:31:29.264049  585929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:29.268978  585929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:29.269019  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:31:30.811247  585929 crio.go:462] duration metric: took 1.547244528s to copy over tarball
	I1205 20:31:30.811340  585929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:31:32.043543  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:34.044420  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:31.260083  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.260626  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.260658  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.260593  586921 retry.go:31] will retry after 593.111543ms: waiting for machine to come up
	I1205 20:31:31.854850  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.855286  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.855316  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.855224  586921 retry.go:31] will retry after 832.693762ms: waiting for machine to come up
	I1205 20:31:32.690035  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:32.690489  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:32.690515  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:32.690448  586921 retry.go:31] will retry after 1.128242733s: waiting for machine to come up
	I1205 20:31:33.820162  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:33.820798  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:33.820831  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:33.820732  586921 retry.go:31] will retry after 1.331730925s: waiting for machine to come up
	I1205 20:31:35.154230  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:35.154661  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:35.154690  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:35.154590  586921 retry.go:31] will retry after 2.19623815s: waiting for machine to come up
	I1205 20:31:32.220318  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:32.719780  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.220114  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.719554  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.720021  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.219461  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.720334  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.219480  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.720159  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.093756  585929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282380101s)
	I1205 20:31:33.093791  585929 crio.go:469] duration metric: took 2.282510298s to extract the tarball
	I1205 20:31:33.093802  585929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:33.132232  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:33.188834  585929 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:31:33.188868  585929 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:31:33.188879  585929 kubeadm.go:934] updating node { 192.168.50.96 8444 v1.31.2 crio true true} ...
	I1205 20:31:33.189027  585929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-942599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:31:33.189114  585929 ssh_runner.go:195] Run: crio config
	I1205 20:31:33.235586  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:31:33.235611  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:33.235621  585929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:33.235644  585929 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-942599 NodeName:default-k8s-diff-port-942599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:31:33.235770  585929 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-942599"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.96"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:33.235835  585929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:31:33.246737  585929 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:33.246829  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:33.257763  585929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1205 20:31:33.276025  585929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:33.294008  585929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
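The kubeadm.yaml printed above is generated from a handful of cluster parameters (node IP, port 8444, pod and service CIDRs, Kubernetes version) and then copied to /var/tmp/minikube/kubeadm.yaml.new. As a rough sketch only, not minikube's real generator, such a file could be rendered from a trimmed-down template like this:

package main

import (
	"os"
	"text/template"
)

// A heavily stripped stand-in for the kubeadm config above; only a few
// fields are kept to show the shape of the templating step.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values copied from the log above.
	params := struct {
		NodeIP, KubernetesVersion, PodSubnet, ServiceCIDR string
		APIServerPort                                     int
	}{"192.168.50.96", "v1.31.2", "10.244.0.0/16", "10.96.0.0/12", 8444}

	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}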
	I1205 20:31:33.311640  585929 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:33.315963  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:33.328834  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:33.439221  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:33.457075  585929 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599 for IP: 192.168.50.96
	I1205 20:31:33.457103  585929 certs.go:194] generating shared ca certs ...
	I1205 20:31:33.457131  585929 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:33.457337  585929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:33.457407  585929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:33.457420  585929 certs.go:256] generating profile certs ...
	I1205 20:31:33.457528  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.key
	I1205 20:31:33.457612  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key.d50b8fb2
	I1205 20:31:33.457668  585929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key
	I1205 20:31:33.457824  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:33.457870  585929 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:33.457885  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:33.457924  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:33.457959  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:33.457989  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:33.458044  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:33.459092  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:33.502129  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:33.533461  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:33.572210  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:33.597643  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 20:31:33.621382  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:31:33.648568  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:33.682320  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:33.707415  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:33.733418  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:33.760333  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:33.794070  585929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:33.813531  585929 ssh_runner.go:195] Run: openssl version
	I1205 20:31:33.820336  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:33.832321  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839066  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839135  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.845526  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:33.857376  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:33.868864  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873732  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873799  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.881275  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:33.893144  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:33.904679  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909686  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909760  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.915937  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
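Each of the three openssl/ln sequences above installs a PEM under /usr/share/ca-certificates and links it from /etc/ssl/certs/<subject-hash>.0, the naming scheme OpenSSL uses to find trusted CAs. A small Go sketch of the same pattern, shelling out to the same openssl invocation seen in the log (paths and error handling are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the certificate's subject hash, then symlinks
// <certDir>/<hash>.0 at the shared certificate, like the ln -fs calls above.
func linkCert(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}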
	I1205 20:31:33.927401  585929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:33.932326  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:33.939165  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:33.945630  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:33.951867  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:33.957857  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:33.963994  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
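Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate will still be valid 24 hours from now. The equivalent check can be done directly with crypto/x509, as in this sketch (the certificate path is taken from the log; the helper name is made up):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the certificate at path will already be expired
// 24 hours from now, i.e. the same question "-checkend 86400" answers.
func expiresSoon(path string) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(soon, err)
}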
	I1205 20:31:33.969964  585929 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:33.970050  585929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:33.970103  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.016733  585929 cri.go:89] found id: ""
	I1205 20:31:34.016814  585929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:34.027459  585929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:34.027478  585929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:34.027523  585929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:34.037483  585929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:34.038588  585929 kubeconfig.go:125] found "default-k8s-diff-port-942599" server: "https://192.168.50.96:8444"
	I1205 20:31:34.041140  585929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:34.050903  585929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.96
	I1205 20:31:34.050938  585929 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:34.050956  585929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:34.051014  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.090840  585929 cri.go:89] found id: ""
	I1205 20:31:34.090932  585929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:34.107686  585929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:34.118277  585929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:34.118305  585929 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:34.118359  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 20:31:34.127654  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:34.127733  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:34.137295  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 20:31:34.147005  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:34.147076  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:34.158576  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.167933  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:34.168022  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.177897  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 20:31:34.187467  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:34.187539  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:31:34.197825  585929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:34.210775  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:34.337491  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.308389  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.549708  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.624390  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.706794  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:35.706912  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.207620  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.707990  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.727214  585929 api_server.go:72] duration metric: took 1.020418782s to wait for apiserver process to appear ...
	I1205 20:31:36.727257  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:31:36.727289  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:36.727908  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
	I1205 20:31:37.228102  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
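Between the "Checking apiserver healthz" and "stopped: ... connection refused" lines, the restart path is simply polling https://192.168.50.96:8444/healthz until the apiserver answers. A stand-alone sketch of such a poll is below; skipping TLS verification is a shortcut for brevity, where a real client would trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver-style /healthz endpoint until it returns 200
// or the deadline passes, roughly matching the check-and-retry lines above.
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s never reported healthy within %v", url, deadline)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.96:8444/healthz", 4*time.Minute))
}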
	I1205 20:31:36.544564  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:39.043806  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:37.352371  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:37.352911  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:37.352946  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:37.352862  586921 retry.go:31] will retry after 2.333670622s: waiting for machine to come up
	I1205 20:31:39.688034  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:39.688597  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:39.688630  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:39.688537  586921 retry.go:31] will retry after 2.476657304s: waiting for machine to come up
	I1205 20:31:37.219933  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:37.720360  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.219574  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.720034  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.219449  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.719752  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.219718  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.719771  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.219548  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.720381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.228416  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:42.228489  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:41.044569  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:43.542439  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:45.543063  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:42.168384  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:42.168759  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:42.168781  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:42.168719  586921 retry.go:31] will retry after 3.531210877s: waiting for machine to come up
	I1205 20:31:45.701387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701831  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has current primary IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701868  585025 main.go:141] libmachine: (no-preload-816185) Found IP for machine: 192.168.61.37
	I1205 20:31:45.701882  585025 main.go:141] libmachine: (no-preload-816185) Reserving static IP address...
	I1205 20:31:45.702270  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.702313  585025 main.go:141] libmachine: (no-preload-816185) DBG | skip adding static IP to network mk-no-preload-816185 - found existing host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"}
	I1205 20:31:45.702327  585025 main.go:141] libmachine: (no-preload-816185) Reserved static IP address: 192.168.61.37
	I1205 20:31:45.702343  585025 main.go:141] libmachine: (no-preload-816185) Waiting for SSH to be available...
	I1205 20:31:45.702355  585025 main.go:141] libmachine: (no-preload-816185) DBG | Getting to WaitForSSH function...
	I1205 20:31:45.704606  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.704941  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.704964  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.705115  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH client type: external
	I1205 20:31:45.705146  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa (-rw-------)
	I1205 20:31:45.705181  585025 main.go:141] libmachine: (no-preload-816185) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:45.705212  585025 main.go:141] libmachine: (no-preload-816185) DBG | About to run SSH command:
	I1205 20:31:45.705224  585025 main.go:141] libmachine: (no-preload-816185) DBG | exit 0
	I1205 20:31:45.828472  585025 main.go:141] libmachine: (no-preload-816185) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:45.828882  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetConfigRaw
	I1205 20:31:45.829596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:45.832338  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832643  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.832671  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832970  585025 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/config.json ...
	I1205 20:31:45.833244  585025 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:45.833275  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:45.833498  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.835937  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836344  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.836375  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836555  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.836744  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.836906  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.837046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.837207  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.837441  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.837456  585025 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:45.940890  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:45.940926  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941234  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:31:45.941262  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941453  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.944124  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944537  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.944585  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944677  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.944862  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945026  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945169  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.945343  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.945511  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.945523  585025 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-816185 && echo "no-preload-816185" | sudo tee /etc/hostname
	I1205 20:31:42.220435  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.720366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.219567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.719652  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.220259  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.719556  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.219850  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.720302  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.220377  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.720107  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.229369  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:47.229421  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:46.063755  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-816185
	
	I1205 20:31:46.063794  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.066742  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067177  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.067208  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067371  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.067576  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067756  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067937  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.068147  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.068392  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.068411  585025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-816185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-816185/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-816185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:46.182072  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
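The SSH command above edits the guest's /etc/hosts so that 127.0.1.1 maps to the new hostname, but only when no entry for it exists yet. Roughly the same idempotent edit expressed in Go looks like this (the path is parameterised so the sketch can be tried on a scratch copy rather than the real /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends or rewrites a 127.0.1.1 line for hostname unless
// some line in the file already ends with that hostname, mirroring the
// grep/sed/tee sequence above.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 1 && f[len(f)-1] == hostname {
			return nil // an entry for this hostname already exists
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("./hosts.test", "no-preload-816185"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}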
	I1205 20:31:46.182110  585025 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:46.182144  585025 buildroot.go:174] setting up certificates
	I1205 20:31:46.182160  585025 provision.go:84] configureAuth start
	I1205 20:31:46.182172  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:46.182490  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:46.185131  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185461  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.185493  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185684  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.188070  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188467  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.188499  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188606  585025 provision.go:143] copyHostCerts
	I1205 20:31:46.188674  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:46.188695  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:46.188753  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:46.188860  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:46.188872  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:46.188892  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:46.188973  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:46.188980  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:46.188998  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:46.189044  585025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.no-preload-816185 san=[127.0.0.1 192.168.61.37 localhost minikube no-preload-816185]
	I1205 20:31:46.460195  585025 provision.go:177] copyRemoteCerts
	I1205 20:31:46.460323  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:46.460394  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.463701  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464171  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.464224  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464422  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.464646  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.464839  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.465024  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.557665  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 20:31:46.583225  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:46.608114  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:46.633059  585025 provision.go:87] duration metric: took 450.879004ms to configureAuth
	I1205 20:31:46.633100  585025 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:46.633319  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:46.633400  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.636634  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637103  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.637138  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637368  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.637624  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.637841  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.638000  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.638189  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.638425  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.638442  585025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:46.877574  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:46.877610  585025 machine.go:96] duration metric: took 1.044347044s to provisionDockerMachine
	I1205 20:31:46.877623  585025 start.go:293] postStartSetup for "no-preload-816185" (driver="kvm2")
	I1205 20:31:46.877634  585025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:46.877668  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:46.878007  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:46.878046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.881022  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881361  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.881422  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881554  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.881741  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.881883  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.882045  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.967997  585025 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:46.972667  585025 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:46.972697  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:46.972770  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:46.972844  585025 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:46.972931  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:46.983157  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:47.009228  585025 start.go:296] duration metric: took 131.588013ms for postStartSetup
	I1205 20:31:47.009272  585025 fix.go:56] duration metric: took 19.33958416s for fixHost
	I1205 20:31:47.009296  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.012039  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012388  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.012416  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012620  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.012858  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013022  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.013318  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:47.013490  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:47.013501  585025 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:47.117166  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430707.083043174
	
	I1205 20:31:47.117195  585025 fix.go:216] guest clock: 1733430707.083043174
	I1205 20:31:47.117203  585025 fix.go:229] Guest: 2024-12-05 20:31:47.083043174 +0000 UTC Remote: 2024-12-05 20:31:47.009275956 +0000 UTC m=+361.003271038 (delta=73.767218ms)
	I1205 20:31:47.117226  585025 fix.go:200] guest clock delta is within tolerance: 73.767218ms
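The guest clock check above runs `date +%s.%N` over SSH and compares the result with the host's wall clock; the ~74ms delta is accepted as within tolerance. A sketch of that comparison, using the timestamps from the log and a hypothetical tolerance value:

package main

import (
	"fmt"
	"time"
)

// clockDelta compares a guest timestamp (seconds since the epoch, as printed
// by `date +%s.%N`) with the host clock and reports whether the skew is
// within the given tolerance.
func clockDelta(guestEpochSeconds float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	guest := time.Unix(0, int64(guestEpochSeconds*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Host and guest values copied from the log; the 2s tolerance is assumed.
	host := time.Date(2024, 12, 5, 20, 31, 47, 9275956, time.UTC)
	delta, ok := clockDelta(1733430707.083043174, host, 2*time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}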
	I1205 20:31:47.117232  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 19.447576666s
	I1205 20:31:47.117259  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.117541  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:47.120283  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120627  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.120653  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120805  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121301  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121492  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121612  585025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:47.121656  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.121727  585025 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:47.121750  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.124146  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124503  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124530  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124723  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124922  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124933  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125086  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125126  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125227  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.125505  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125653  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.221731  585025 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:47.228177  585025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:47.377695  585025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:47.384534  585025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:47.384623  585025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:47.402354  585025 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:47.402388  585025 start.go:495] detecting cgroup driver to use...
	I1205 20:31:47.402454  585025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:47.426593  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:47.443953  585025 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:47.444011  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:47.461107  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:47.477872  585025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:47.617097  585025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:47.780021  585025 docker.go:233] disabling docker service ...
	I1205 20:31:47.780140  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:47.795745  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:47.809573  585025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:47.959910  585025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:48.081465  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:48.096513  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:48.116342  585025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:48.116409  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.128016  585025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:48.128095  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.139511  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.151241  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.162858  585025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:48.174755  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.185958  585025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.203724  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.215682  585025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:48.226478  585025 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:48.226551  585025 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:48.242781  585025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
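[Editor's note] The three commands above show the usual fallback when the bridge netfilter sysctl is absent: the probe exits with status 255 because br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is switched on. A rough sketch of that probe-then-fallback logic follows, shelling out locally rather than over SSH; the helper name is illustrative, error handling is simplified, and root privileges are assumed.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the sequence in the log: check the
	// bridge-nf-call-iptables sysctl, load br_netfilter if it is missing,
	// then make sure IPv4 forwarding is on.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// The sysctl only exists once the module is loaded (hence status 255 above).
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, "netfilter setup:", err)
		}
	}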
	I1205 20:31:48.254921  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:48.373925  585025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:48.471515  585025 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:48.471625  585025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:48.477640  585025 start.go:563] Will wait 60s for crictl version
	I1205 20:31:48.477707  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.481862  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:48.521367  585025 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:48.521465  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.552343  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.583089  585025 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:31:48.043043  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:50.043172  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:48.584504  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:48.587210  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587539  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:48.587568  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587788  585025 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:48.592190  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:48.606434  585025 kubeadm.go:883] updating cluster {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:48.606605  585025 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:48.606666  585025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:48.642948  585025 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:48.642978  585025 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:48.643061  585025 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.643092  585025 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.643168  585025 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.643075  585025 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.643248  585025 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 20:31:48.643119  585025 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644692  585025 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.644712  585025 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 20:31:48.644694  585025 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.644798  585025 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.644800  585025 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644858  585025 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.811007  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.819346  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.859678  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 20:31:48.864065  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.864191  585025 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 20:31:48.864249  585025 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.864310  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.883959  585025 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 20:31:48.884022  585025 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.884078  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.902180  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.918167  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.946617  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.039706  585025 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 20:31:49.039760  585025 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.039783  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.039808  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039869  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.039887  585025 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 20:31:49.039913  585025 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.039938  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039947  585025 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 20:31:49.039969  585025 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.040001  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.040002  585025 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 20:31:49.040026  585025 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.040069  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.098900  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.098990  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.105551  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.105588  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.105612  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.105646  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.201473  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.218211  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.257277  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.257335  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.257345  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.257479  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.316037  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 20:31:49.316135  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.316159  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.356780  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 20:31:49.356906  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:49.382843  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.405772  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.405863  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.428491  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 20:31:49.428541  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 20:31:49.428563  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428587  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 20:31:49.428611  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428648  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:49.487794  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 20:31:49.487825  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 20:31:49.487893  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 20:31:49.487917  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:31:49.487927  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:49.488022  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:31:49.830311  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:47.219913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.720441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.220220  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.719997  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.219843  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.719591  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.220132  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.719528  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.720234  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.230527  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:52.230575  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:52.543415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:55.042668  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:52.150499  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.721854606s)
	I1205 20:31:52.150547  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 20:31:52.150573  585025 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150588  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.721911838s)
	I1205 20:31:52.150623  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150627  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 20:31:52.150697  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.662646854s)
	I1205 20:31:52.150727  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 20:31:52.150752  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.662648047s)
	I1205 20:31:52.150776  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 20:31:52.150785  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.662799282s)
	I1205 20:31:52.150804  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1205 20:31:52.150834  585025 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.320487562s)
	I1205 20:31:52.150874  585025 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:31:52.150907  585025 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.150943  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:55.858372  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.707687772s)
	I1205 20:31:55.858414  585025 ssh_runner.go:235] Completed: which crictl: (3.707446137s)
	I1205 20:31:55.858498  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:55.858426  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 20:31:55.858580  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.858640  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.901375  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.219602  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.719522  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.220117  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.720426  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.220177  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.720100  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.219569  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.719796  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.219490  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.720420  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.231370  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:57.231415  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.612431  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": read tcp 192.168.50.1:36198->192.168.50.96:8444: read: connection reset by peer
	I1205 20:31:57.727638  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.728368  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
	I1205 20:31:57.042989  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:59.043517  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:57.843623  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.984954959s)
	I1205 20:31:57.843662  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 20:31:57.843683  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843731  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843732  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.942323285s)
	I1205 20:31:57.843821  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:00.030765  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.186998467s)
	I1205 20:32:00.030810  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 20:32:00.030840  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.030846  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.18699947s)
	I1205 20:32:00.030897  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:32:00.030906  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.031026  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:31:57.219497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.720337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.219807  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.720112  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.219949  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.719626  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.219871  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.719466  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.219491  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.719760  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.227807  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:01.044658  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:03.542453  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:05.542887  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:01.486433  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455500806s)
	I1205 20:32:01.486479  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 20:32:01.486512  585025 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:01.486513  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.455460879s)
	I1205 20:32:01.486589  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:32:01.486592  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:03.658906  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.172262326s)
	I1205 20:32:03.658947  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 20:32:03.658979  585025 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:03.659024  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:04.304774  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 20:32:04.304825  585025 cache_images.go:123] Successfully loaded all cached images
	I1205 20:32:04.304832  585025 cache_images.go:92] duration metric: took 15.661840579s to LoadCachedImages
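[Editor's note] The image.go / cache_images.go / crio.go block above follows one pattern per image: inspect the runtime store for the expected ID, remove the stale tag if the image is not present at that hash, then `podman load` the cached tarball from /var/lib/minikube/images. The sketch below condenses that per-image flow; it runs commands locally instead of through ssh_runner, simplifies error handling, and its function name is hypothetical.

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
	)

	// loadCachedImage mirrors the per-image flow in the log: if the image is
	// not already present in the podman/CRI-O store, drop any stale tag and
	// load the cached tarball.
	func loadCachedImage(image, tarball string) error {
		// "podman image inspect" succeeds only when the image is present.
		if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
			return nil // already loaded, nothing to transfer
		}
		// Remove whatever is currently tagged under this name (ignore "not found").
		_ = exec.Command("sudo", "crictl", "rmi", image).Run()
		if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", tarball, err)
		}
		return nil
	}

	func main() {
		img := "registry.k8s.io/kube-apiserver:v1.31.2"
		tar := filepath.Join("/var/lib/minikube/images", "kube-apiserver_v1.31.2")
		if err := loadCachedImage(img, tar); err != nil {
			fmt.Println(err)
		}
	}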
	I1205 20:32:04.304846  585025 kubeadm.go:934] updating node { 192.168.61.37 8443 v1.31.2 crio true true} ...
	I1205 20:32:04.304983  585025 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-816185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:32:04.305057  585025 ssh_runner.go:195] Run: crio config
	I1205 20:32:04.350303  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:04.350332  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:04.350352  585025 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:32:04.350383  585025 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.37 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-816185 NodeName:no-preload-816185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:32:04.350534  585025 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-816185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.37"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.37"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:32:04.350618  585025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:32:04.362733  585025 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:32:04.362815  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:32:04.374219  585025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 20:32:04.392626  585025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:32:04.409943  585025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1205 20:32:04.428180  585025 ssh_runner.go:195] Run: grep 192.168.61.37	control-plane.minikube.internal$ /etc/hosts
	I1205 20:32:04.432433  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
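[Editor's note] The grep/cp pipeline above is an idempotent /etc/hosts update: any existing line for control-plane.minikube.internal is dropped, the fresh mapping is appended, and the result is copied back into place. Below is a self-contained sketch of the same idea in Go, operating on whatever path it is given so it can be tried against a scratch copy rather than the real /etc/hosts; the function name is hypothetical.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsEntry rewrites hostsPath so that exactly one line maps host to
	// ip, preserving every unrelated entry -- the same effect as the
	// grep -v / echo / cp pipeline in the log.
	func setHostsEntry(hostsPath, ip, host string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := make([]string, 0, len(lines)+1)
		for _, line := range lines {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop any stale mapping for this host
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := setHostsEntry("/tmp/hosts.copy", "192.168.61.37", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}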
	I1205 20:32:04.447274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:04.591755  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:04.609441  585025 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185 for IP: 192.168.61.37
	I1205 20:32:04.609472  585025 certs.go:194] generating shared ca certs ...
	I1205 20:32:04.609494  585025 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:04.609664  585025 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:32:04.609729  585025 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:32:04.609745  585025 certs.go:256] generating profile certs ...
	I1205 20:32:04.609910  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/client.key
	I1205 20:32:04.609991  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key.e9b85612
	I1205 20:32:04.610027  585025 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key
	I1205 20:32:04.610146  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:32:04.610173  585025 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:32:04.610182  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:32:04.610216  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:32:04.610264  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:32:04.610313  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:32:04.610377  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:32:04.611264  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:32:04.642976  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:32:04.679840  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:32:04.707526  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:32:04.746333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:32:04.782671  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:32:04.819333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:32:04.845567  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:32:04.870304  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:32:04.894597  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:32:04.918482  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:32:04.942992  585025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:32:04.960576  585025 ssh_runner.go:195] Run: openssl version
	I1205 20:32:04.966908  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:32:04.978238  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.982959  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.983023  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.989070  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:32:05.000979  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:32:05.012901  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.017583  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.018169  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.025450  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:32:05.037419  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:32:05.050366  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055211  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055255  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.061388  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:32:05.074182  585025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:32:05.079129  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:32:05.085580  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:32:05.091938  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:32:05.099557  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:32:05.105756  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:32:05.112019  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
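[Editor's note] The batch of `openssl x509 -noout ... -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least another day before the restart proceeds. The following is a sketch of the equivalent check using Go's crypto/x509 (not minikube's code); the paths are taken from the commands above and would need root to read on a real node.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires in
	// less than d -- the same question "openssl x509 -checkend" answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			fmt.Println(p, "expires within 24h:", soon, "err:", err)
		}
	}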
	I1205 20:32:05.118426  585025 kubeadm.go:392] StartCluster: {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:32:05.118540  585025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:32:05.118622  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.162731  585025 cri.go:89] found id: ""
	I1205 20:32:05.162821  585025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:32:05.174100  585025 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:32:05.174127  585025 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:32:05.174181  585025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:32:05.184949  585025 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:32:05.186127  585025 kubeconfig.go:125] found "no-preload-816185" server: "https://192.168.61.37:8443"
	I1205 20:32:05.188601  585025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:32:05.198779  585025 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.37
	I1205 20:32:05.198815  585025 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:32:05.198828  585025 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:32:05.198881  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.241175  585025 cri.go:89] found id: ""
	I1205 20:32:05.241247  585025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:32:05.259698  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:32:05.270282  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:32:05.270310  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:32:05.270370  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:32:05.280440  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:32:05.280519  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:32:05.290825  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:32:05.300680  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:32:05.300745  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:32:05.311108  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.320854  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:32:05.320918  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.331099  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:32:05.340948  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:32:05.341017  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:32:05.351280  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:32:05.361567  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:05.477138  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:02.220337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:02.720145  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.219463  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.719913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.219813  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.719940  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.219830  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.720324  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.220287  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.719584  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.228372  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:03.228433  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:08.042416  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:10.043011  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:06.259256  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.483460  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.557633  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.666782  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:32:06.666885  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.167840  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.667069  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.701559  585025 api_server.go:72] duration metric: took 1.034769472s to wait for apiserver process to appear ...
	I1205 20:32:07.701592  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:32:07.701612  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.640462  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.640498  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.640521  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.647093  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.647118  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.702286  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.711497  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:10.711528  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:07.219989  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.720289  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.220381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.719947  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.219838  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.719666  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.219756  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.720312  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.220369  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.720004  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.202247  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.206625  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.206650  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:11.702760  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.718941  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.718974  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:12.202567  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:12.207589  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:32:12.214275  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:12.214304  585025 api_server.go:131] duration metric: took 4.512704501s to wait for apiserver health ...
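
The healthz sequence above is a plain polling loop: a 403 (the probe hits the endpoint anonymously) or a 500 (post-start hooks still settling) means "keep waiting", and only a 200 "ok" ends the wait. A self-contained sketch under those assumptions, with the address and timeout taken from this run purely as examples:

// Illustrative polling loop, not minikube's api_server.go implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		// Skipping TLS verification is a simplification for the sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.37:8443/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 and 500 both fall through to another retry.
			fmt.Println("healthz returned", code, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
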
	I1205 20:32:12.214314  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:12.214321  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:12.216193  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:08.229499  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:08.229544  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:12.545378  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:15.043628  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:12.217640  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:12.241907  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
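
For illustration only, the snippet below prints an approximation of the kind of bridge conflist written to /etc/cni/net.d/1-k8s.conflist above; the field values are assumptions and the exact 496-byte file shipped by minikube is not reproduced here.

// Hypothetical bridge CNI conflist shape, emitted as JSON for readability.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "1.0.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				// Subnet is an assumption, not the value minikube configures.
				"ipam": map[string]any{"type": "host-local", "subnet": "10.244.0.0/16"},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
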
	I1205 20:32:12.262114  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:12.275246  585025 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:12.275296  585025 system_pods.go:61] "coredns-7c65d6cfc9-j2hr2" [9ce413ab-c304-40dd-af68-80f15db0e2ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:12.275308  585025 system_pods.go:61] "etcd-no-preload-816185" [ddc20062-02d9-4f9d-a2fb-fa2c7d6aa1cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:12.275319  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [07ff76f2-b05e-4434-b8f9-448bc200507a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:12.275328  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [7c701058-791a-4097-a913-f6989a791067] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:12.275340  585025 system_pods.go:61] "kube-proxy-rjp4j" [340e9ccc-0290-4d3d-829c-44ad65410f3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:12.275348  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [c2f3b04c-9e3a-4060-a6d0-fb9eb2aa5e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:32:12.275359  585025 system_pods.go:61] "metrics-server-6867b74b74-vjwq2" [47ff24fe-0edb-4d06-b280-a0d965b25dae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:12.275367  585025 system_pods.go:61] "storage-provisioner" [bd385e87-56ea-417c-a4a8-b8a6e4f94114] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:12.275376  585025 system_pods.go:74] duration metric: took 13.23725ms to wait for pod list to return data ...
	I1205 20:32:12.275387  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:12.279719  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:12.279746  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:12.279755  585025 node_conditions.go:105] duration metric: took 4.364464ms to run NodePressure ...
	I1205 20:32:12.279774  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:12.562221  585025 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566599  585025 kubeadm.go:739] kubelet initialised
	I1205 20:32:12.566627  585025 kubeadm.go:740] duration metric: took 4.374855ms waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566639  585025 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:12.571780  585025 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:14.579614  585025 pod_ready.go:103] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"False"
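
The pod_ready lines above poll each system-critical pod until its PodReady condition turns True or the 4m0s budget runs out. A minimal client-go sketch of that wait; the kubeconfig path and pod name are taken from this run purely as examples, and the real logic is minikube's kverify package, not this code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	podName := "coredns-7c65d6cfc9-j2hr2"
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}
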
	I1205 20:32:12.220304  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:12.720348  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.219553  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.720078  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.219614  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.719625  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.220118  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.720577  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.220392  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.719538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.230519  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:13.230567  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.061543  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.061583  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.061603  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.078424  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.078457  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.227852  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.553664  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.553705  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:16.728155  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.734800  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.734853  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.228013  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.233541  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:17.233577  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.727878  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.736731  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:32:17.746474  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:17.746511  585929 api_server.go:131] duration metric: took 41.019245279s to wait for apiserver health ...
	I1205 20:32:17.746523  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:32:17.746531  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:17.748464  585929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:17.750113  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:17.762750  585929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:32:17.786421  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:17.826859  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:17.826918  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:17.826934  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:17.826946  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:17.826959  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:17.826969  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:17.826980  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:32:17.826989  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:17.827000  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:17.827010  585929 system_pods.go:74] duration metric: took 40.565274ms to wait for pod list to return data ...
	I1205 20:32:17.827025  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:17.838000  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:17.838034  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:17.838050  585929 node_conditions.go:105] duration metric: took 11.010352ms to run NodePressure ...
	I1205 20:32:17.838075  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:18.215713  585929 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222162  585929 kubeadm.go:739] kubelet initialised
	I1205 20:32:18.222187  585929 kubeadm.go:740] duration metric: took 6.444578ms waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222199  585929 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:18.226988  585929 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.235570  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235600  585929 pod_ready.go:82] duration metric: took 8.582972ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.235609  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235617  585929 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.242596  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242623  585929 pod_ready.go:82] duration metric: took 6.99814ms for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.242634  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242642  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.248351  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248373  585929 pod_ready.go:82] duration metric: took 5.725371ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.248383  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248390  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.258151  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258174  585929 pod_ready.go:82] duration metric: took 9.778119ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.258183  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258190  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.619579  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619623  585929 pod_ready.go:82] duration metric: took 361.426091ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.619638  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619649  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.019623  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019655  585929 pod_ready.go:82] duration metric: took 399.997558ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.019669  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019676  585929 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.420201  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420228  585929 pod_ready.go:82] duration metric: took 400.54576ms for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.420242  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420251  585929 pod_ready.go:39] duration metric: took 1.198040831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
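
Note the "(skipping!)" branches above: while the node itself reports Ready=False, each per-pod wait is abandoned immediately rather than consuming its full 4m0s. A short sketch of that guard, with clientset construction elided (see the previous sketch); the node name is taken from this run as an example only.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) bool {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	var cs kubernetes.Interface // construct as in the previous sketch
	if cs == nil {
		fmt.Println("clientset construction elided in this sketch")
		return
	}
	if !nodeReady(context.Background(), cs, "default-k8s-diff-port-942599") {
		fmt.Println("node not Ready: skipping per-pod readiness wait")
	}
}
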
	I1205 20:32:19.420292  585929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:32:19.434385  585929 ops.go:34] apiserver oom_adj: -16
	I1205 20:32:19.434420  585929 kubeadm.go:597] duration metric: took 45.406934122s to restartPrimaryControlPlane
	I1205 20:32:19.434434  585929 kubeadm.go:394] duration metric: took 45.464483994s to StartCluster
	I1205 20:32:19.434460  585929 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.434560  585929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:32:19.436299  585929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.436590  585929 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:32:19.436736  585929 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:32:19.436837  585929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436858  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:32:19.436873  585929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.436883  585929 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:32:19.436923  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.436938  585929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436974  585929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-942599"
	I1205 20:32:19.436922  585929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.437024  585929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.437051  585929 addons.go:243] addon metrics-server should already be in state true
	I1205 20:32:19.437090  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.437365  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437407  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437452  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437480  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437509  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437514  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.438584  585929 out.go:177] * Verifying Kubernetes components...
	I1205 20:32:19.440376  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:19.453761  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I1205 20:32:19.453782  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I1205 20:32:19.453767  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I1205 20:32:19.454289  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454441  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454451  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454851  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454871  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.455005  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455021  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455286  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455350  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455409  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455461  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.455910  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455927  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455958  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.455966  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.458587  585929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.458605  585929 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:32:19.458627  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.458955  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.458995  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.472175  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I1205 20:32:19.472667  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.472927  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I1205 20:32:19.473215  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.473233  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.473401  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40929
	I1205 20:32:19.473570  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473608  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.473839  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.474155  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474187  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474290  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474313  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474546  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474638  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474711  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.475267  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.475320  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.476105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.476447  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.478117  585929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:19.478117  585929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:32:17.545165  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.044285  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:17.079986  585025 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:17.080014  585025 pod_ready.go:82] duration metric: took 4.508210865s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:17.080025  585025 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.086070  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.587742  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:20.587775  585025 pod_ready.go:82] duration metric: took 3.507742173s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:20.587789  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.479638  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:32:19.479658  585929 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:32:19.479686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.479719  585929 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.479737  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:32:19.479750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.483208  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483350  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483773  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483790  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483873  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483887  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483936  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484123  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484294  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484324  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484438  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.484456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484571  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.533651  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I1205 20:32:19.534273  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.534802  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.534833  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.535282  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.535535  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.538221  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.538787  585929 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.538804  585929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:32:19.538825  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.541876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542318  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.542354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542556  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.542744  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.542944  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.543129  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.630282  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:19.652591  585929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:19.719058  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.810931  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.812113  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:32:19.812136  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:32:19.875725  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:32:19.875761  585929 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:32:19.946353  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:19.946390  585929 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:32:20.010445  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:20.231055  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231082  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231425  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231454  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231469  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231478  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231476  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.231764  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231784  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231783  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.247021  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.247051  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.247463  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.247490  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.247488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.074948  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.263976727s)
	I1205 20:32:21.075015  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075029  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075397  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075438  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.075449  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075457  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.075766  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075785  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134215  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.123724822s)
	I1205 20:32:21.134271  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134588  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134604  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134612  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134615  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.134620  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134878  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134891  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134904  585929 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-942599"
	I1205 20:32:21.136817  585929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:32:17.220437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:17.220539  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:17.272666  585602 cri.go:89] found id: ""
	I1205 20:32:17.272702  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.272716  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:17.272723  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:17.272797  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:17.314947  585602 cri.go:89] found id: ""
	I1205 20:32:17.314977  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.314989  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:17.314996  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:17.315061  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:17.354511  585602 cri.go:89] found id: ""
	I1205 20:32:17.354548  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.354561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:17.354571  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:17.354640  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:17.393711  585602 cri.go:89] found id: ""
	I1205 20:32:17.393745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.393759  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:17.393768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:17.393836  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:17.434493  585602 cri.go:89] found id: ""
	I1205 20:32:17.434526  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.434535  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:17.434541  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:17.434602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:17.476201  585602 cri.go:89] found id: ""
	I1205 20:32:17.476235  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.476245  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:17.476253  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:17.476341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:17.516709  585602 cri.go:89] found id: ""
	I1205 20:32:17.516745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.516755  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:17.516762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:17.516818  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:17.557270  585602 cri.go:89] found id: ""
	I1205 20:32:17.557305  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.557314  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:17.557324  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:17.557348  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:17.606494  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:17.606540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:17.681372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:17.681412  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:17.696778  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:17.696816  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:17.839655  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:17.839679  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:17.839717  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.423552  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:20.439794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:20.439875  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:20.482820  585602 cri.go:89] found id: ""
	I1205 20:32:20.482866  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.482880  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:20.482888  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:20.482958  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:20.523590  585602 cri.go:89] found id: ""
	I1205 20:32:20.523629  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.523641  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:20.523649  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:20.523727  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:20.601603  585602 cri.go:89] found id: ""
	I1205 20:32:20.601638  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.601648  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:20.601656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:20.601728  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:20.643927  585602 cri.go:89] found id: ""
	I1205 20:32:20.643959  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.643972  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:20.643981  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:20.644054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:20.690935  585602 cri.go:89] found id: ""
	I1205 20:32:20.690964  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.690975  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:20.690984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:20.691054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:20.728367  585602 cri.go:89] found id: ""
	I1205 20:32:20.728400  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.728412  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:20.728420  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:20.728489  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:20.766529  585602 cri.go:89] found id: ""
	I1205 20:32:20.766562  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.766571  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:20.766578  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:20.766657  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:20.805641  585602 cri.go:89] found id: ""
	I1205 20:32:20.805680  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.805690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:20.805701  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:20.805718  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:20.884460  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:20.884495  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:20.884514  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.998367  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:20.998429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:21.041210  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:21.041247  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:21.103519  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:21.103557  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:21.138175  585929 addons.go:510] duration metric: took 1.701453382s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:32:21.657269  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:22.541880  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:24.543481  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:22.595422  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.594392  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:23.594419  585025 pod_ready.go:82] duration metric: took 3.006622534s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:23.594430  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:25.601616  585025 pod_ready.go:103] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.619187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:23.633782  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:23.633872  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:23.679994  585602 cri.go:89] found id: ""
	I1205 20:32:23.680023  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.680032  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:23.680038  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:23.680094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:23.718362  585602 cri.go:89] found id: ""
	I1205 20:32:23.718425  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.718439  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:23.718447  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:23.718520  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:23.758457  585602 cri.go:89] found id: ""
	I1205 20:32:23.758491  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.758500  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:23.758506  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:23.758558  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:23.794612  585602 cri.go:89] found id: ""
	I1205 20:32:23.794649  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.794662  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:23.794671  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:23.794738  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:23.832309  585602 cri.go:89] found id: ""
	I1205 20:32:23.832341  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.832354  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:23.832361  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:23.832421  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:23.868441  585602 cri.go:89] found id: ""
	I1205 20:32:23.868472  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.868484  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:23.868492  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:23.868573  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:23.902996  585602 cri.go:89] found id: ""
	I1205 20:32:23.903025  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.903036  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:23.903050  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:23.903115  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:23.939830  585602 cri.go:89] found id: ""
	I1205 20:32:23.939865  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.939879  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:23.939892  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:23.939909  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:23.992310  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:23.992354  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:24.007378  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:24.007414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:24.077567  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:24.077594  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:24.077608  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:24.165120  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:24.165163  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:26.711674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:26.726923  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:26.727008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:26.763519  585602 cri.go:89] found id: ""
	I1205 20:32:26.763554  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.763563  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:26.763570  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:26.763628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:26.802600  585602 cri.go:89] found id: ""
	I1205 20:32:26.802635  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.802644  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:26.802650  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:26.802705  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:26.839920  585602 cri.go:89] found id: ""
	I1205 20:32:26.839967  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.839981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:26.839989  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:26.840076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:24.157515  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:26.657197  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:27.656811  585929 node_ready.go:49] node "default-k8s-diff-port-942599" has status "Ready":"True"
	I1205 20:32:27.656842  585929 node_ready.go:38] duration metric: took 8.004215314s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:27.656854  585929 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:27.662792  585929 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668485  585929 pod_ready.go:93] pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.668510  585929 pod_ready.go:82] duration metric: took 5.690516ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668521  585929 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:26.543536  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:28.544214  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:27.101514  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.101540  585025 pod_ready.go:82] duration metric: took 3.507102769s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.101551  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108084  585025 pod_ready.go:93] pod "kube-proxy-rjp4j" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.108116  585025 pod_ready.go:82] duration metric: took 6.557141ms for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108131  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112915  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.112942  585025 pod_ready.go:82] duration metric: took 4.801285ms for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112955  585025 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.119094  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:26.876377  585602 cri.go:89] found id: ""
	I1205 20:32:26.876406  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.876416  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:26.876422  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:26.876491  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:26.913817  585602 cri.go:89] found id: ""
	I1205 20:32:26.913845  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.913854  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:26.913862  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:26.913936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:26.955739  585602 cri.go:89] found id: ""
	I1205 20:32:26.955775  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.955788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:26.955798  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:26.955863  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:26.996191  585602 cri.go:89] found id: ""
	I1205 20:32:26.996223  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.996234  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:26.996242  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:26.996341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:27.040905  585602 cri.go:89] found id: ""
	I1205 20:32:27.040935  585602 logs.go:282] 0 containers: []
	W1205 20:32:27.040947  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:27.040958  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:27.040973  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:27.098103  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:27.098140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:27.116538  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:27.116574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:27.204154  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:27.204187  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:27.204208  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:27.300380  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:27.300431  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:29.840944  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:29.855784  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:29.855869  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:29.893728  585602 cri.go:89] found id: ""
	I1205 20:32:29.893765  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.893777  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:29.893786  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:29.893867  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:29.930138  585602 cri.go:89] found id: ""
	I1205 20:32:29.930176  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.930186  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:29.930193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:29.930248  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:29.966340  585602 cri.go:89] found id: ""
	I1205 20:32:29.966371  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.966380  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:29.966387  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:29.966463  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:30.003868  585602 cri.go:89] found id: ""
	I1205 20:32:30.003900  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.003920  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:30.003928  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:30.004001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:30.044332  585602 cri.go:89] found id: ""
	I1205 20:32:30.044363  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.044373  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:30.044380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:30.044445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:30.088044  585602 cri.go:89] found id: ""
	I1205 20:32:30.088085  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.088098  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:30.088106  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:30.088173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:30.124221  585602 cri.go:89] found id: ""
	I1205 20:32:30.124248  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.124258  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:30.124285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:30.124357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:30.162092  585602 cri.go:89] found id: ""
	I1205 20:32:30.162121  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.162133  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:30.162146  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:30.162162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:30.218526  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:30.218567  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:30.232240  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:30.232292  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:30.308228  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:30.308260  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:30.308296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:30.389348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:30.389391  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:29.177093  585929 pod_ready.go:93] pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.177118  585929 pod_ready.go:82] duration metric: took 1.508590352s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.177129  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185839  585929 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.185869  585929 pod_ready.go:82] duration metric: took 8.733028ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185883  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191924  585929 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.191950  585929 pod_ready.go:82] duration metric: took 6.059525ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191963  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256484  585929 pod_ready.go:93] pod "kube-proxy-5vdcq" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.256510  585929 pod_ready.go:82] duration metric: took 64.540117ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256521  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656933  585929 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.656961  585929 pod_ready.go:82] duration metric: took 400.432279ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656972  585929 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:31.664326  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.043630  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.044035  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.542861  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.120200  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.120303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.120532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:32.934497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:32.949404  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:32.949488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:33.006117  585602 cri.go:89] found id: ""
	I1205 20:32:33.006148  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.006157  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:33.006163  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:33.006231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:33.064907  585602 cri.go:89] found id: ""
	I1205 20:32:33.064945  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.064958  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:33.064966  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:33.065031  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:33.101268  585602 cri.go:89] found id: ""
	I1205 20:32:33.101295  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.101304  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:33.101310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:33.101378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:33.141705  585602 cri.go:89] found id: ""
	I1205 20:32:33.141733  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.141743  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:33.141750  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:33.141810  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:33.180983  585602 cri.go:89] found id: ""
	I1205 20:32:33.181011  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.181020  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:33.181026  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:33.181086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:33.220742  585602 cri.go:89] found id: ""
	I1205 20:32:33.220779  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.220791  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:33.220799  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:33.220871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:33.255980  585602 cri.go:89] found id: ""
	I1205 20:32:33.256009  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.256017  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:33.256024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:33.256080  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:33.292978  585602 cri.go:89] found id: ""
	I1205 20:32:33.293005  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.293013  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:33.293023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:33.293034  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:33.347167  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:33.347213  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:33.361367  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:33.361408  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:33.435871  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:33.435915  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:33.435932  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:33.518835  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:33.518880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:36.066359  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:36.080867  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:36.080947  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:36.117647  585602 cri.go:89] found id: ""
	I1205 20:32:36.117678  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.117689  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:36.117697  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:36.117763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:36.154376  585602 cri.go:89] found id: ""
	I1205 20:32:36.154412  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.154428  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:36.154436  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:36.154498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:36.193225  585602 cri.go:89] found id: ""
	I1205 20:32:36.193261  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.193274  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:36.193282  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:36.193347  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:36.230717  585602 cri.go:89] found id: ""
	I1205 20:32:36.230748  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.230758  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:36.230764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:36.230817  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:36.270186  585602 cri.go:89] found id: ""
	I1205 20:32:36.270238  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.270252  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:36.270262  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:36.270340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:36.306378  585602 cri.go:89] found id: ""
	I1205 20:32:36.306425  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.306438  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:36.306447  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:36.306531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:36.342256  585602 cri.go:89] found id: ""
	I1205 20:32:36.342289  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.342300  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:36.342306  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:36.342380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:36.380684  585602 cri.go:89] found id: ""
	I1205 20:32:36.380718  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.380732  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:36.380745  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:36.380768  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:36.436066  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:36.436109  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:36.450255  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:36.450285  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:36.521857  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:36.521883  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:36.521897  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:36.608349  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:36.608395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:34.163870  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:36.164890  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:38.042889  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.543140  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:37.619863  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.120462  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:39.157366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:39.171267  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:39.171357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:39.214459  585602 cri.go:89] found id: ""
	I1205 20:32:39.214490  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.214520  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:39.214528  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:39.214583  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:39.250312  585602 cri.go:89] found id: ""
	I1205 20:32:39.250352  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.250366  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:39.250375  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:39.250437  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:39.286891  585602 cri.go:89] found id: ""
	I1205 20:32:39.286932  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.286944  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:39.286952  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:39.287019  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:39.323923  585602 cri.go:89] found id: ""
	I1205 20:32:39.323958  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.323970  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:39.323979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:39.324053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:39.360280  585602 cri.go:89] found id: ""
	I1205 20:32:39.360322  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.360331  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:39.360337  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:39.360403  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:39.397599  585602 cri.go:89] found id: ""
	I1205 20:32:39.397637  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.397650  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:39.397659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:39.397731  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:39.435132  585602 cri.go:89] found id: ""
	I1205 20:32:39.435159  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.435168  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:39.435174  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:39.435241  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:39.470653  585602 cri.go:89] found id: ""
	I1205 20:32:39.470682  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.470690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:39.470700  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:39.470714  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:39.511382  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:39.511413  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:39.563955  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:39.563994  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:39.578015  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:39.578044  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:39.658505  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:39.658535  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:39.658550  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:38.665320  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:41.165054  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.545231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.042231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.620687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.120915  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.248607  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:42.263605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:42.263688  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:42.305480  585602 cri.go:89] found id: ""
	I1205 20:32:42.305508  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.305519  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:42.305527  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:42.305595  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:42.339969  585602 cri.go:89] found id: ""
	I1205 20:32:42.340001  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.340010  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:42.340016  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:42.340090  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:42.381594  585602 cri.go:89] found id: ""
	I1205 20:32:42.381630  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.381643  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:42.381651  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:42.381771  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:42.435039  585602 cri.go:89] found id: ""
	I1205 20:32:42.435072  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.435085  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:42.435093  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:42.435162  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:42.470567  585602 cri.go:89] found id: ""
	I1205 20:32:42.470595  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.470604  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:42.470610  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:42.470674  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:42.510695  585602 cri.go:89] found id: ""
	I1205 20:32:42.510723  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.510731  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:42.510738  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:42.510793  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:42.547687  585602 cri.go:89] found id: ""
	I1205 20:32:42.547711  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.547718  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:42.547735  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:42.547784  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:42.587160  585602 cri.go:89] found id: ""
	I1205 20:32:42.587191  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.587199  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:42.587211  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:42.587225  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:42.669543  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:42.669587  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:42.717795  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:42.717833  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:42.772644  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:42.772696  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:42.788443  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:42.788480  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:42.861560  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.362758  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:45.377178  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:45.377266  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:45.413055  585602 cri.go:89] found id: ""
	I1205 20:32:45.413088  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.413102  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:45.413111  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:45.413176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:45.453769  585602 cri.go:89] found id: ""
	I1205 20:32:45.453799  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.453808  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:45.453813  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:45.453879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:45.499481  585602 cri.go:89] found id: ""
	I1205 20:32:45.499511  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.499522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:45.499531  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:45.499598  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:45.537603  585602 cri.go:89] found id: ""
	I1205 20:32:45.537638  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.537647  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:45.537653  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:45.537707  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:45.572430  585602 cri.go:89] found id: ""
	I1205 20:32:45.572463  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.572471  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:45.572479  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:45.572556  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:45.610349  585602 cri.go:89] found id: ""
	I1205 20:32:45.610387  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.610398  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:45.610406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:45.610476  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:45.649983  585602 cri.go:89] found id: ""
	I1205 20:32:45.650018  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.650031  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:45.650038  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:45.650113  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:45.689068  585602 cri.go:89] found id: ""
	I1205 20:32:45.689099  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.689107  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:45.689118  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:45.689131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:45.743715  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:45.743758  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:45.759803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:45.759834  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:45.835107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.835133  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:45.835146  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:45.914590  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:45.914632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:43.665616  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:46.164064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.045269  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.544519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.619099  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.627948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:48.456633  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:48.475011  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:48.475086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:48.512878  585602 cri.go:89] found id: ""
	I1205 20:32:48.512913  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.512925  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:48.512933  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:48.513002  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:48.551708  585602 cri.go:89] found id: ""
	I1205 20:32:48.551737  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.551744  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:48.551751  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:48.551805  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:48.590765  585602 cri.go:89] found id: ""
	I1205 20:32:48.590791  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.590800  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:48.590806  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:48.590859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:48.629447  585602 cri.go:89] found id: ""
	I1205 20:32:48.629473  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.629481  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:48.629487  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:48.629540  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:48.667299  585602 cri.go:89] found id: ""
	I1205 20:32:48.667329  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.667339  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:48.667347  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:48.667414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:48.703771  585602 cri.go:89] found id: ""
	I1205 20:32:48.703816  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.703830  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:48.703841  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:48.703911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:48.747064  585602 cri.go:89] found id: ""
	I1205 20:32:48.747098  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.747111  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:48.747118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:48.747186  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.786608  585602 cri.go:89] found id: ""
	I1205 20:32:48.786649  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.786663  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:48.786684  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:48.786700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:48.860834  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:48.860866  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:48.860881  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:48.944029  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:48.944082  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:48.982249  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:48.982284  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:49.036460  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:49.036509  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.556456  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:51.571498  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:51.571590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:51.616890  585602 cri.go:89] found id: ""
	I1205 20:32:51.616924  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.616934  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:51.616942  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:51.617008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:51.660397  585602 cri.go:89] found id: ""
	I1205 20:32:51.660433  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.660445  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:51.660453  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:51.660543  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:51.698943  585602 cri.go:89] found id: ""
	I1205 20:32:51.698973  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.698981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:51.698988  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:51.699041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:51.737254  585602 cri.go:89] found id: ""
	I1205 20:32:51.737288  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.737297  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:51.737310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:51.737366  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:51.775560  585602 cri.go:89] found id: ""
	I1205 20:32:51.775592  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.775600  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:51.775606  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:51.775681  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:51.814314  585602 cri.go:89] found id: ""
	I1205 20:32:51.814370  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.814383  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:51.814393  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:51.814464  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:51.849873  585602 cri.go:89] found id: ""
	I1205 20:32:51.849913  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.849935  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:51.849944  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:51.850018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.164562  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:50.664498  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.044224  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.542721  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.118857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.120231  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:51.891360  585602 cri.go:89] found id: ""
	I1205 20:32:51.891388  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.891400  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:51.891412  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:51.891429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:51.943812  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:51.943854  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.959119  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:51.959152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:52.036014  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:52.036040  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:52.036059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:52.114080  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:52.114122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:54.657243  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:54.672319  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:54.672407  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:54.708446  585602 cri.go:89] found id: ""
	I1205 20:32:54.708475  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.708484  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:54.708491  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:54.708569  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:54.747309  585602 cri.go:89] found id: ""
	I1205 20:32:54.747347  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.747359  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:54.747370  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:54.747451  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:54.790742  585602 cri.go:89] found id: ""
	I1205 20:32:54.790772  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.790781  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:54.790787  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:54.790853  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:54.828857  585602 cri.go:89] found id: ""
	I1205 20:32:54.828885  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.828894  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:54.828902  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:54.828964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:54.867691  585602 cri.go:89] found id: ""
	I1205 20:32:54.867729  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.867740  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:54.867747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:54.867819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:54.907216  585602 cri.go:89] found id: ""
	I1205 20:32:54.907242  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.907249  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:54.907256  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:54.907308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:54.945800  585602 cri.go:89] found id: ""
	I1205 20:32:54.945827  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.945837  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:54.945844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:54.945895  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:54.993176  585602 cri.go:89] found id: ""
	I1205 20:32:54.993216  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.993228  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:54.993242  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:54.993258  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:55.045797  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:55.045835  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:55.060103  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:55.060136  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:55.129440  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:55.129467  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:55.129485  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:55.214949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:55.214999  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:53.164619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:55.663605  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.543148  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.543374  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.543687  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.620220  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.620759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.626643  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:57.755086  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:57.769533  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:57.769622  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:57.807812  585602 cri.go:89] found id: ""
	I1205 20:32:57.807847  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.807858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:57.807869  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:57.807941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:57.846179  585602 cri.go:89] found id: ""
	I1205 20:32:57.846209  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.846223  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:57.846232  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:57.846305  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:57.881438  585602 cri.go:89] found id: ""
	I1205 20:32:57.881473  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.881482  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:57.881496  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:57.881553  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:57.918242  585602 cri.go:89] found id: ""
	I1205 20:32:57.918283  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.918294  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:57.918302  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:57.918378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:57.962825  585602 cri.go:89] found id: ""
	I1205 20:32:57.962863  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.962873  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:57.962879  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:57.962955  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:58.004655  585602 cri.go:89] found id: ""
	I1205 20:32:58.004699  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.004711  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:58.004731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:58.004802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:58.043701  585602 cri.go:89] found id: ""
	I1205 20:32:58.043730  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.043738  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:58.043744  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:58.043802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:58.081400  585602 cri.go:89] found id: ""
	I1205 20:32:58.081437  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.081450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:58.081463  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:58.081486  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:58.135531  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:58.135573  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:58.149962  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:58.149998  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:58.227810  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:58.227834  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:58.227849  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:58.308173  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:58.308219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:00.848019  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:00.863423  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:00.863496  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:00.902526  585602 cri.go:89] found id: ""
	I1205 20:33:00.902553  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.902561  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:00.902567  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:00.902621  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:00.939891  585602 cri.go:89] found id: ""
	I1205 20:33:00.939932  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.939942  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:00.939948  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:00.940022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:00.981645  585602 cri.go:89] found id: ""
	I1205 20:33:00.981676  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.981684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:00.981691  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:00.981745  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:01.027753  585602 cri.go:89] found id: ""
	I1205 20:33:01.027780  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.027789  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:01.027795  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:01.027877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:01.064529  585602 cri.go:89] found id: ""
	I1205 20:33:01.064559  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.064567  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:01.064574  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:01.064628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:01.102239  585602 cri.go:89] found id: ""
	I1205 20:33:01.102272  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.102281  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:01.102287  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:01.102357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:01.139723  585602 cri.go:89] found id: ""
	I1205 20:33:01.139760  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.139770  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:01.139778  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:01.139845  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:01.176172  585602 cri.go:89] found id: ""
	I1205 20:33:01.176198  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.176207  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:01.176216  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:01.176231  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:01.230085  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:01.230133  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:01.245574  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:01.245617  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:01.340483  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:01.340520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:01.340537  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:01.416925  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:01.416972  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:58.164852  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.664376  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:02.677134  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.042415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.543101  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.119783  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.120647  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.958855  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:03.974024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:03.974096  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:04.021407  585602 cri.go:89] found id: ""
	I1205 20:33:04.021442  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.021451  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:04.021458  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:04.021523  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:04.063385  585602 cri.go:89] found id: ""
	I1205 20:33:04.063414  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.063423  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:04.063430  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:04.063488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:04.103693  585602 cri.go:89] found id: ""
	I1205 20:33:04.103735  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.103747  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:04.103756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:04.103815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:04.143041  585602 cri.go:89] found id: ""
	I1205 20:33:04.143072  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.143100  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:04.143109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:04.143179  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:04.180668  585602 cri.go:89] found id: ""
	I1205 20:33:04.180702  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.180712  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:04.180718  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:04.180778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:04.221848  585602 cri.go:89] found id: ""
	I1205 20:33:04.221885  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.221894  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:04.221901  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:04.222018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:04.263976  585602 cri.go:89] found id: ""
	I1205 20:33:04.264014  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.264024  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:04.264030  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:04.264097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:04.298698  585602 cri.go:89] found id: ""
	I1205 20:33:04.298726  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.298737  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:04.298751  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:04.298767  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:04.347604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:04.347659  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:04.361325  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:04.361361  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:04.437679  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:04.437704  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:04.437720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:04.520043  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:04.520103  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:05.163317  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.165936  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:08.043365  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:10.544442  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.122134  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:09.620228  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.070687  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:07.085290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:07.085367  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:07.126233  585602 cri.go:89] found id: ""
	I1205 20:33:07.126265  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.126276  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:07.126285  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:07.126346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:07.163004  585602 cri.go:89] found id: ""
	I1205 20:33:07.163040  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.163053  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:07.163061  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:07.163126  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:07.201372  585602 cri.go:89] found id: ""
	I1205 20:33:07.201412  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.201425  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:07.201435  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:07.201509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:07.237762  585602 cri.go:89] found id: ""
	I1205 20:33:07.237795  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.237807  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:07.237815  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:07.237885  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:07.273940  585602 cri.go:89] found id: ""
	I1205 20:33:07.273976  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.273985  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:07.273995  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:07.274057  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:07.311028  585602 cri.go:89] found id: ""
	I1205 20:33:07.311061  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.311070  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:07.311076  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:07.311131  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:07.347386  585602 cri.go:89] found id: ""
	I1205 20:33:07.347422  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.347433  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:07.347441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:07.347503  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:07.386412  585602 cri.go:89] found id: ""
	I1205 20:33:07.386446  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.386458  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:07.386471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:07.386489  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:07.430250  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:07.430280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:07.483936  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:07.483982  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:07.498201  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:07.498236  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:07.576741  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:07.576767  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:07.576780  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.164792  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:10.178516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:10.178596  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:10.215658  585602 cri.go:89] found id: ""
	I1205 20:33:10.215692  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.215702  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:10.215711  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:10.215779  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:10.251632  585602 cri.go:89] found id: ""
	I1205 20:33:10.251671  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.251683  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:10.251691  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:10.251763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:10.295403  585602 cri.go:89] found id: ""
	I1205 20:33:10.295435  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.295453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:10.295460  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:10.295513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:10.329747  585602 cri.go:89] found id: ""
	I1205 20:33:10.329778  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.329787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:10.329793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:10.329871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:10.369975  585602 cri.go:89] found id: ""
	I1205 20:33:10.370016  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.370028  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:10.370036  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:10.370104  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:10.408146  585602 cri.go:89] found id: ""
	I1205 20:33:10.408183  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.408196  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:10.408204  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:10.408288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:10.443803  585602 cri.go:89] found id: ""
	I1205 20:33:10.443839  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.443850  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:10.443858  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:10.443932  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:10.481784  585602 cri.go:89] found id: ""
	I1205 20:33:10.481826  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.481840  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:10.481854  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:10.481872  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:10.531449  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:10.531498  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:10.549258  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:10.549288  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:10.620162  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:10.620189  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:10.620206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.704656  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:10.704706  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
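The cycle recorded above repeats for the rest of this run: the v1.20.0 control plane never starts, so every "sudo crictl ps -a --quiet --name=..." probe returns an empty ID list and each "describe nodes" attempt fails with a connection refused on localhost:8443. A minimal Go sketch of that polling pattern is shown below; the runCommand helper is hypothetical and simply shells out on the local host, whereas the real harness runs the same commands inside the VM over SSH, so treat this as an illustration of the loop rather than minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runCommand is a stand-in for the harness's ssh_runner: it runs the
// command locally instead of inside the minikube VM over SSH.
func runCommand(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"}
	for attempt := 1; attempt <= 3; attempt++ {
		for _, name := range components {
			// Mirrors the logged "sudo crictl ps -a --quiet --name=..." probes.
			ids, _ := runCommand("sudo crictl ps -a --quiet --name=" + name)
			if ids == "" {
				fmt.Printf("attempt %d: no container found matching %q\n", attempt, name)
			}
		}
		// Mirrors the "describe nodes" probe that keeps failing while the
		// apiserver is down.
		if _, err := runCommand("kubectl describe nodes"); err != nil {
			fmt.Println("describe nodes failed:", err)
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between cycles
	}
}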
	I1205 20:33:09.663940  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.163534  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:13.043720  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:15.542736  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.118781  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:14.619996  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:13.251518  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:13.264731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:13.264815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:13.297816  585602 cri.go:89] found id: ""
	I1205 20:33:13.297846  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.297855  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:13.297861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:13.297918  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:13.330696  585602 cri.go:89] found id: ""
	I1205 20:33:13.330724  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.330732  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:13.330738  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:13.330789  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:13.366257  585602 cri.go:89] found id: ""
	I1205 20:33:13.366304  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.366315  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:13.366321  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:13.366385  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:13.403994  585602 cri.go:89] found id: ""
	I1205 20:33:13.404030  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.404042  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:13.404051  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:13.404121  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:13.450160  585602 cri.go:89] found id: ""
	I1205 20:33:13.450189  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.450198  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:13.450205  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:13.450262  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:13.502593  585602 cri.go:89] found id: ""
	I1205 20:33:13.502629  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.502640  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:13.502650  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:13.502720  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:13.548051  585602 cri.go:89] found id: ""
	I1205 20:33:13.548084  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.548095  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:13.548103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:13.548166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:13.593913  585602 cri.go:89] found id: ""
	I1205 20:33:13.593947  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.593960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:13.593975  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:13.593997  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:13.674597  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:13.674628  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:13.674647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:13.760747  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:13.760796  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:13.804351  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:13.804383  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:13.856896  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:13.856958  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.372754  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:16.387165  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:16.387242  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:16.426612  585602 cri.go:89] found id: ""
	I1205 20:33:16.426655  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.426668  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:16.426676  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:16.426734  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:16.461936  585602 cri.go:89] found id: ""
	I1205 20:33:16.461974  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.461988  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:16.461997  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:16.462060  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:16.498010  585602 cri.go:89] found id: ""
	I1205 20:33:16.498044  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.498062  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:16.498069  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:16.498133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:16.533825  585602 cri.go:89] found id: ""
	I1205 20:33:16.533854  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.533863  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:16.533869  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:16.533941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:16.570834  585602 cri.go:89] found id: ""
	I1205 20:33:16.570875  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.570887  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:16.570896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:16.570968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:16.605988  585602 cri.go:89] found id: ""
	I1205 20:33:16.606026  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.606038  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:16.606047  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:16.606140  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:16.645148  585602 cri.go:89] found id: ""
	I1205 20:33:16.645178  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.645188  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:16.645195  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:16.645261  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:16.682449  585602 cri.go:89] found id: ""
	I1205 20:33:16.682479  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.682491  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:16.682502  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:16.682519  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.696944  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:16.696980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:16.777034  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:16.777064  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:16.777078  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:14.164550  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.664527  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:17.543278  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:19.543404  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.621517  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:18.626303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.854812  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:16.854880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:16.905101  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:16.905131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.463427  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:19.477135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:19.477233  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:19.529213  585602 cri.go:89] found id: ""
	I1205 20:33:19.529248  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.529264  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:19.529274  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:19.529359  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:19.575419  585602 cri.go:89] found id: ""
	I1205 20:33:19.575453  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.575465  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:19.575474  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:19.575546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:19.616657  585602 cri.go:89] found id: ""
	I1205 20:33:19.616691  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.616704  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:19.616713  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:19.616787  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:19.653142  585602 cri.go:89] found id: ""
	I1205 20:33:19.653177  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.653189  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:19.653198  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:19.653267  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:19.690504  585602 cri.go:89] found id: ""
	I1205 20:33:19.690544  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.690555  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:19.690563  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:19.690635  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:19.730202  585602 cri.go:89] found id: ""
	I1205 20:33:19.730229  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.730237  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:19.730245  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:19.730302  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:19.767212  585602 cri.go:89] found id: ""
	I1205 20:33:19.767243  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.767255  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:19.767264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:19.767336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:19.803089  585602 cri.go:89] found id: ""
	I1205 20:33:19.803125  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.803137  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:19.803163  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:19.803180  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:19.884542  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:19.884589  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:19.925257  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:19.925303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.980457  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:19.980510  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:19.997026  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:19.997057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:20.075062  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
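Every one of these "describe nodes" failures has the same root cause: nothing is listening on the apiserver port that the kubeconfig points at. A quick way to confirm that from the node is to dial localhost:8443 directly and see whether the connection is refused, sketched below with Go's standard net package; this check is not part of the test harness and is offered only as a diagnostic.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubeconfig used above points at localhost:8443. If the
	// kube-apiserver container never started, this dial fails with
	// "connection refused", matching the kubectl error in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}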
	I1205 20:33:18.664915  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.163064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:22.042272  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:24.043822  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.120054  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:23.120944  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.618857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:22.575469  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:22.588686  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:22.588768  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:22.622824  585602 cri.go:89] found id: ""
	I1205 20:33:22.622860  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.622868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:22.622874  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:22.622931  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:22.659964  585602 cri.go:89] found id: ""
	I1205 20:33:22.660059  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.660074  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:22.660085  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:22.660153  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:22.695289  585602 cri.go:89] found id: ""
	I1205 20:33:22.695325  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.695337  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:22.695345  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:22.695417  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:22.734766  585602 cri.go:89] found id: ""
	I1205 20:33:22.734801  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.734813  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:22.734821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:22.734896  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:22.773778  585602 cri.go:89] found id: ""
	I1205 20:33:22.773806  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.773818  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:22.773826  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:22.773899  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:22.811468  585602 cri.go:89] found id: ""
	I1205 20:33:22.811503  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.811514  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:22.811521  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:22.811591  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:22.852153  585602 cri.go:89] found id: ""
	I1205 20:33:22.852210  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.852221  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:22.852227  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:22.852318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:22.888091  585602 cri.go:89] found id: ""
	I1205 20:33:22.888120  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.888129  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:22.888139  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:22.888155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:22.943210  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:22.943252  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:22.958356  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:22.958393  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:23.026732  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:23.026770  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:23.026788  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:23.106356  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:23.106395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:25.650832  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:25.665392  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:25.665475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:25.701109  585602 cri.go:89] found id: ""
	I1205 20:33:25.701146  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.701155  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:25.701162  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:25.701231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:25.738075  585602 cri.go:89] found id: ""
	I1205 20:33:25.738108  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.738117  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:25.738123  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:25.738176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:25.775031  585602 cri.go:89] found id: ""
	I1205 20:33:25.775078  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.775090  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:25.775100  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:25.775173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:25.811343  585602 cri.go:89] found id: ""
	I1205 20:33:25.811376  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.811386  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:25.811395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:25.811471  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:25.846635  585602 cri.go:89] found id: ""
	I1205 20:33:25.846674  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.846684  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:25.846692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:25.846766  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:25.881103  585602 cri.go:89] found id: ""
	I1205 20:33:25.881136  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.881145  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:25.881151  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:25.881224  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:25.917809  585602 cri.go:89] found id: ""
	I1205 20:33:25.917844  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.917855  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:25.917864  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:25.917936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:25.955219  585602 cri.go:89] found id: ""
	I1205 20:33:25.955245  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.955254  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:25.955264  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:25.955276  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:26.007016  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:26.007059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:26.021554  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:26.021601  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:26.099290  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:26.099321  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:26.099334  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:26.182955  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:26.182993  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:23.164876  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.665151  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:26.542519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:28.542856  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.542941  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:27.621687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.119140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
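The interleaved pod_ready lines come from the other parallel StartStop runs, each waiting for its metrics-server pod to report Ready, which never happens in this window. A rough equivalent of that readiness check is sketched below, assuming kubectl is on PATH and using one pod name taken from the log; the real harness reads the pod status through its own Go client code rather than shelling out, so this is only an approximation of the check being logged.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pod name taken from the log lines above; adjust for your cluster.
	pod := "metrics-server-6867b74b74-vjwq2"
	// JSONPath pulls the status of the pod's Ready condition, the same
	// condition the pod_ready checks above keep reporting as "False".
	out, err := exec.Command("kubectl", "get", "pod", "-n", "kube-system", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Printf("pod %s Ready condition: %s\n", pod, strings.TrimSpace(string(out)))
}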
	I1205 20:33:28.725201  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:28.739515  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:28.739602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.778187  585602 cri.go:89] found id: ""
	I1205 20:33:28.778230  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.778242  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:28.778249  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:28.778315  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:28.815788  585602 cri.go:89] found id: ""
	I1205 20:33:28.815826  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.815838  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:28.815845  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:28.815912  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:28.852222  585602 cri.go:89] found id: ""
	I1205 20:33:28.852251  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.852261  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:28.852289  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:28.852362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:28.889742  585602 cri.go:89] found id: ""
	I1205 20:33:28.889776  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.889787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:28.889794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:28.889859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:28.926872  585602 cri.go:89] found id: ""
	I1205 20:33:28.926903  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.926912  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:28.926919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:28.926972  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:28.963380  585602 cri.go:89] found id: ""
	I1205 20:33:28.963418  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.963432  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:28.963441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:28.963509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:29.000711  585602 cri.go:89] found id: ""
	I1205 20:33:29.000746  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.000764  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:29.000772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:29.000848  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:29.035934  585602 cri.go:89] found id: ""
	I1205 20:33:29.035963  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.035974  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:29.035987  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:29.036003  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:29.091336  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:29.091382  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:29.105784  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:29.105814  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:29.182038  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:29.182078  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:29.182095  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:29.261107  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:29.261153  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:31.802911  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:31.817285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:31.817369  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.164470  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.664154  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:33.043654  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.044730  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:32.120759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:34.619618  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:31.854865  585602 cri.go:89] found id: ""
	I1205 20:33:31.854900  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.854914  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:31.854922  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:31.854995  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:31.893928  585602 cri.go:89] found id: ""
	I1205 20:33:31.893964  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.893977  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:31.893984  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:31.894053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:31.929490  585602 cri.go:89] found id: ""
	I1205 20:33:31.929527  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.929540  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:31.929548  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:31.929637  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:31.964185  585602 cri.go:89] found id: ""
	I1205 20:33:31.964211  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.964219  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:31.964225  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:31.964291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:32.002708  585602 cri.go:89] found id: ""
	I1205 20:33:32.002748  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.002760  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:32.002768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:32.002847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:32.040619  585602 cri.go:89] found id: ""
	I1205 20:33:32.040712  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.040740  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:32.040758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:32.040839  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:32.079352  585602 cri.go:89] found id: ""
	I1205 20:33:32.079390  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.079404  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:32.079412  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:32.079484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:32.117560  585602 cri.go:89] found id: ""
	I1205 20:33:32.117596  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.117608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:32.117629  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:32.117653  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:32.172639  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:32.172686  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:32.187687  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:32.187727  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:32.265000  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:32.265034  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:32.265051  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:32.348128  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:32.348176  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:34.890144  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:34.903953  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:34.904032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:34.939343  585602 cri.go:89] found id: ""
	I1205 20:33:34.939374  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.939383  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:34.939389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:34.939444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:34.978225  585602 cri.go:89] found id: ""
	I1205 20:33:34.978266  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.978278  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:34.978286  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:34.978363  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:35.015918  585602 cri.go:89] found id: ""
	I1205 20:33:35.015950  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.015960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:35.015966  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:35.016032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:35.053222  585602 cri.go:89] found id: ""
	I1205 20:33:35.053249  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.053257  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:35.053264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:35.053320  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:35.088369  585602 cri.go:89] found id: ""
	I1205 20:33:35.088401  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.088412  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:35.088421  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:35.088498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:35.135290  585602 cri.go:89] found id: ""
	I1205 20:33:35.135327  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.135338  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:35.135346  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:35.135412  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:35.174959  585602 cri.go:89] found id: ""
	I1205 20:33:35.174996  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.175008  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:35.175017  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:35.175097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:35.215101  585602 cri.go:89] found id: ""
	I1205 20:33:35.215134  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.215143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:35.215152  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:35.215167  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:35.269372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:35.269414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:35.285745  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:35.285776  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:35.364774  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:35.364807  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:35.364824  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:35.445932  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:35.445980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:33.163790  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.163966  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.164819  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.047128  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.543051  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:36.620450  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.120055  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.996837  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:38.010545  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:38.010612  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:38.048292  585602 cri.go:89] found id: ""
	I1205 20:33:38.048334  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.048350  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:38.048360  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:38.048429  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:38.086877  585602 cri.go:89] found id: ""
	I1205 20:33:38.086911  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.086921  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:38.086927  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:38.087001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:38.122968  585602 cri.go:89] found id: ""
	I1205 20:33:38.122999  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.123010  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:38.123018  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:38.123082  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:38.164901  585602 cri.go:89] found id: ""
	I1205 20:33:38.164940  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.164949  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:38.164955  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:38.165006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:38.200697  585602 cri.go:89] found id: ""
	I1205 20:33:38.200725  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.200734  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:38.200740  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:38.200803  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:38.240306  585602 cri.go:89] found id: ""
	I1205 20:33:38.240338  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.240347  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:38.240354  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:38.240424  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:38.275788  585602 cri.go:89] found id: ""
	I1205 20:33:38.275823  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.275835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:38.275844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:38.275917  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:38.311431  585602 cri.go:89] found id: ""
	I1205 20:33:38.311468  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.311480  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:38.311493  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:38.311507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:38.361472  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:38.361515  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:38.375970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:38.376004  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:38.450913  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:38.450941  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:38.450961  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:38.527620  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:38.527666  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:41.072438  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:41.086085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:41.086168  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:41.123822  585602 cri.go:89] found id: ""
	I1205 20:33:41.123852  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.123861  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:41.123868  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:41.123919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:41.160343  585602 cri.go:89] found id: ""
	I1205 20:33:41.160371  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.160380  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:41.160389  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:41.160457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:41.198212  585602 cri.go:89] found id: ""
	I1205 20:33:41.198240  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.198249  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:41.198255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:41.198309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:41.233793  585602 cri.go:89] found id: ""
	I1205 20:33:41.233824  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.233832  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:41.233838  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:41.233890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:41.269397  585602 cri.go:89] found id: ""
	I1205 20:33:41.269435  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.269447  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:41.269457  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:41.269529  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:41.303079  585602 cri.go:89] found id: ""
	I1205 20:33:41.303116  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.303128  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:41.303136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:41.303196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:41.337784  585602 cri.go:89] found id: ""
	I1205 20:33:41.337817  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.337826  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:41.337832  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:41.337901  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:41.371410  585602 cri.go:89] found id: ""
	I1205 20:33:41.371438  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.371446  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:41.371456  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:41.371467  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:41.422768  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:41.422807  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:41.437427  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:41.437461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:41.510875  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:41.510898  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:41.510915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:41.590783  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:41.590826  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:39.667344  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.172287  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.043022  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.543222  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:41.120670  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:43.622132  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:45.623483  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.136390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:44.149935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:44.150006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:44.187807  585602 cri.go:89] found id: ""
	I1205 20:33:44.187846  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.187858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:44.187866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:44.187933  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:44.224937  585602 cri.go:89] found id: ""
	I1205 20:33:44.224965  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.224973  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:44.224978  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:44.225040  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:44.260230  585602 cri.go:89] found id: ""
	I1205 20:33:44.260274  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.260287  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:44.260297  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:44.260439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:44.296410  585602 cri.go:89] found id: ""
	I1205 20:33:44.296439  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.296449  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:44.296455  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:44.296507  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:44.332574  585602 cri.go:89] found id: ""
	I1205 20:33:44.332623  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.332635  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:44.332642  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:44.332709  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:44.368925  585602 cri.go:89] found id: ""
	I1205 20:33:44.368973  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.368985  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:44.368994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:44.369068  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:44.410041  585602 cri.go:89] found id: ""
	I1205 20:33:44.410075  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.410088  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:44.410095  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:44.410165  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:44.454254  585602 cri.go:89] found id: ""
	I1205 20:33:44.454295  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.454316  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:44.454330  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:44.454346  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:44.507604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:44.507669  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:44.525172  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:44.525219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:44.599417  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:44.599446  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:44.599465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:44.681624  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:44.681685  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:44.664942  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.163452  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.043225  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:49.044675  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:48.120302  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:50.120568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.230092  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:47.243979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:47.244076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:47.280346  585602 cri.go:89] found id: ""
	I1205 20:33:47.280376  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.280385  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:47.280392  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:47.280448  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:47.316454  585602 cri.go:89] found id: ""
	I1205 20:33:47.316479  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.316487  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:47.316493  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:47.316546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:47.353339  585602 cri.go:89] found id: ""
	I1205 20:33:47.353374  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.353386  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:47.353395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:47.353466  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:47.388256  585602 cri.go:89] found id: ""
	I1205 20:33:47.388319  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.388330  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:47.388339  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:47.388408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:47.424907  585602 cri.go:89] found id: ""
	I1205 20:33:47.424942  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.424953  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:47.424961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:47.425035  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:47.461386  585602 cri.go:89] found id: ""
	I1205 20:33:47.461416  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.461425  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:47.461431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:47.461485  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:47.501092  585602 cri.go:89] found id: ""
	I1205 20:33:47.501121  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.501130  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:47.501136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:47.501189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:47.559478  585602 cri.go:89] found id: ""
	I1205 20:33:47.559507  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.559520  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:47.559533  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:47.559551  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:47.609761  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:47.609800  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:47.626579  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:47.626606  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:47.713490  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:47.713520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:47.713540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:47.795346  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:47.795398  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.339441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:50.353134  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:50.353216  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:50.393950  585602 cri.go:89] found id: ""
	I1205 20:33:50.393979  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.393990  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:50.394007  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:50.394074  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:50.431166  585602 cri.go:89] found id: ""
	I1205 20:33:50.431201  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.431212  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:50.431221  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:50.431291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:50.472641  585602 cri.go:89] found id: ""
	I1205 20:33:50.472674  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.472684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:50.472692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:50.472763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:50.512111  585602 cri.go:89] found id: ""
	I1205 20:33:50.512152  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.512165  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:50.512173  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:50.512247  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:50.554500  585602 cri.go:89] found id: ""
	I1205 20:33:50.554536  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.554549  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:50.554558  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:50.554625  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:50.590724  585602 cri.go:89] found id: ""
	I1205 20:33:50.590755  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.590764  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:50.590771  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:50.590837  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:50.628640  585602 cri.go:89] found id: ""
	I1205 20:33:50.628666  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.628675  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:50.628681  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:50.628732  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:50.670009  585602 cri.go:89] found id: ""
	I1205 20:33:50.670039  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.670047  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:50.670063  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:50.670075  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:50.684236  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:50.684290  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:50.757761  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:50.757790  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:50.757813  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:50.839665  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:50.839720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.881087  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:50.881122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:49.164986  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.665655  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.543286  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.543689  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:52.621297  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:54.621764  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.433345  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:53.446747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:53.446819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:53.482928  585602 cri.go:89] found id: ""
	I1205 20:33:53.482967  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.482979  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:53.482988  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:53.483048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:53.519096  585602 cri.go:89] found id: ""
	I1205 20:33:53.519128  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.519136  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:53.519142  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:53.519196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:53.556207  585602 cri.go:89] found id: ""
	I1205 20:33:53.556233  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.556243  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:53.556249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:53.556346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:53.589708  585602 cri.go:89] found id: ""
	I1205 20:33:53.589736  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.589745  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:53.589758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:53.589813  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:53.630344  585602 cri.go:89] found id: ""
	I1205 20:33:53.630371  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.630380  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:53.630386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:53.630438  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:53.668895  585602 cri.go:89] found id: ""
	I1205 20:33:53.668921  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.668929  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:53.668935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:53.668987  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:53.706601  585602 cri.go:89] found id: ""
	I1205 20:33:53.706628  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.706638  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:53.706644  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:53.706704  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:53.744922  585602 cri.go:89] found id: ""
	I1205 20:33:53.744952  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.744960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:53.744970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:53.744989  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:53.823816  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:53.823853  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:53.823928  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:53.905075  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:53.905118  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:53.955424  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:53.955468  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:54.014871  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:54.014916  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.537142  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:56.550409  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:56.550478  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:56.587148  585602 cri.go:89] found id: ""
	I1205 20:33:56.587174  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.587184  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:56.587190  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:56.587249  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:56.625153  585602 cri.go:89] found id: ""
	I1205 20:33:56.625180  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.625188  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:56.625193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:56.625243  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:56.671545  585602 cri.go:89] found id: ""
	I1205 20:33:56.671573  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.671582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:56.671589  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:56.671652  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:56.712760  585602 cri.go:89] found id: ""
	I1205 20:33:56.712797  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.712810  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:56.712818  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:56.712890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:56.751219  585602 cri.go:89] found id: ""
	I1205 20:33:56.751254  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.751266  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:56.751274  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:56.751340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:56.787946  585602 cri.go:89] found id: ""
	I1205 20:33:56.787985  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.787998  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:56.788007  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:56.788101  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:56.823057  585602 cri.go:89] found id: ""
	I1205 20:33:56.823095  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.823108  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:56.823114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:56.823170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:54.164074  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.165063  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.043193  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:58.044158  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.542798  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.624407  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:59.119743  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.860358  585602 cri.go:89] found id: ""
	I1205 20:33:56.860396  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.860408  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:56.860421  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:56.860438  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:56.912954  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:56.912996  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.927642  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:56.927691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:57.007316  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:57.007344  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:57.007359  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:57.091471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:57.091522  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:59.642150  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:59.656240  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:59.656324  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:59.695918  585602 cri.go:89] found id: ""
	I1205 20:33:59.695954  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.695965  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:59.695973  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:59.696037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:59.744218  585602 cri.go:89] found id: ""
	I1205 20:33:59.744250  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.744260  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:59.744278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:59.744340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:59.799035  585602 cri.go:89] found id: ""
	I1205 20:33:59.799081  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.799094  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:59.799102  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:59.799172  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:59.850464  585602 cri.go:89] found id: ""
	I1205 20:33:59.850505  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.850517  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:59.850526  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:59.850590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:59.886441  585602 cri.go:89] found id: ""
	I1205 20:33:59.886477  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.886489  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:59.886497  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:59.886564  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:59.926689  585602 cri.go:89] found id: ""
	I1205 20:33:59.926728  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.926741  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:59.926751  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:59.926821  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:59.962615  585602 cri.go:89] found id: ""
	I1205 20:33:59.962644  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.962653  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:59.962659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:59.962716  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:00.001852  585602 cri.go:89] found id: ""
	I1205 20:34:00.001878  585602 logs.go:282] 0 containers: []
	W1205 20:34:00.001886  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:00.001897  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:00.001913  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:00.055465  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:00.055508  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:00.071904  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:00.071941  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:00.151225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:00.151248  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:00.151262  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:00.233869  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:00.233914  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:58.664773  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.664948  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:02.543019  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:04.543810  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:01.120136  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:03.120824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.620283  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:02.776751  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:02.790868  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:02.790945  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:02.834686  585602 cri.go:89] found id: ""
	I1205 20:34:02.834719  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.834731  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:02.834740  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:02.834823  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:02.871280  585602 cri.go:89] found id: ""
	I1205 20:34:02.871313  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.871333  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:02.871342  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:02.871413  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:02.907300  585602 cri.go:89] found id: ""
	I1205 20:34:02.907336  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.907346  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:02.907352  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:02.907406  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:02.945453  585602 cri.go:89] found id: ""
	I1205 20:34:02.945487  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.945499  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:02.945511  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:02.945587  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:02.980528  585602 cri.go:89] found id: ""
	I1205 20:34:02.980561  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.980573  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:02.980580  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:02.980653  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:03.016919  585602 cri.go:89] found id: ""
	I1205 20:34:03.016946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.016955  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:03.016961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:03.017012  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:03.053541  585602 cri.go:89] found id: ""
	I1205 20:34:03.053575  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.053588  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:03.053596  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:03.053655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:03.089907  585602 cri.go:89] found id: ""
	I1205 20:34:03.089946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.089959  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:03.089974  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:03.089991  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:03.144663  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:03.144700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:03.160101  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:03.160140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:03.231559  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:03.231583  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:03.231600  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:03.313226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:03.313271  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:05.855538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:05.869019  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:05.869120  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:05.906879  585602 cri.go:89] found id: ""
	I1205 20:34:05.906910  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.906921  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:05.906928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:05.906994  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:05.946846  585602 cri.go:89] found id: ""
	I1205 20:34:05.946881  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.946893  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:05.946900  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:05.946968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:05.984067  585602 cri.go:89] found id: ""
	I1205 20:34:05.984104  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.984118  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:05.984127  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:05.984193  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:06.024984  585602 cri.go:89] found id: ""
	I1205 20:34:06.025014  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.025023  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:06.025029  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:06.025091  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:06.064766  585602 cri.go:89] found id: ""
	I1205 20:34:06.064794  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.064806  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:06.064821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:06.064877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:06.105652  585602 cri.go:89] found id: ""
	I1205 20:34:06.105683  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.105691  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:06.105698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:06.105748  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:06.143732  585602 cri.go:89] found id: ""
	I1205 20:34:06.143762  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.143773  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:06.143781  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:06.143857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:06.183397  585602 cri.go:89] found id: ""
	I1205 20:34:06.183429  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.183439  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:06.183449  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:06.183462  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:06.236403  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:06.236449  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:06.250728  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:06.250759  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:06.320983  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:06.321009  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:06.321025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:06.408037  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:06.408084  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:03.164354  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.665345  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:07.044218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:09.543580  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.119532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.119918  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.955959  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:08.968956  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:08.969037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:09.002804  585602 cri.go:89] found id: ""
	I1205 20:34:09.002846  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.002859  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:09.002866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:09.002935  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:09.039098  585602 cri.go:89] found id: ""
	I1205 20:34:09.039191  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.039210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:09.039220  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:09.039291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:09.074727  585602 cri.go:89] found id: ""
	I1205 20:34:09.074764  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.074776  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:09.074792  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:09.074861  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:09.112650  585602 cri.go:89] found id: ""
	I1205 20:34:09.112682  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.112692  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:09.112698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:09.112754  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:09.149301  585602 cri.go:89] found id: ""
	I1205 20:34:09.149346  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.149359  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:09.149368  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:09.149432  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:09.190288  585602 cri.go:89] found id: ""
	I1205 20:34:09.190317  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.190329  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:09.190338  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:09.190404  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:09.225311  585602 cri.go:89] found id: ""
	I1205 20:34:09.225348  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.225361  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:09.225369  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:09.225435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:09.261023  585602 cri.go:89] found id: ""
	I1205 20:34:09.261052  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.261063  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:09.261075  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:09.261092  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:09.313733  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:09.313785  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:09.329567  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:09.329619  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:09.403397  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:09.403430  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:09.403447  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:09.486586  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:09.486630  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:08.163730  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.663603  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.665663  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:11.544538  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.042854  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.120629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.621977  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.028110  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:12.041802  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:12.041866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:12.080349  585602 cri.go:89] found id: ""
	I1205 20:34:12.080388  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.080402  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:12.080410  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:12.080475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:12.121455  585602 cri.go:89] found id: ""
	I1205 20:34:12.121486  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.121499  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:12.121507  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:12.121567  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:12.157743  585602 cri.go:89] found id: ""
	I1205 20:34:12.157768  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.157785  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:12.157794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:12.157855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:12.196901  585602 cri.go:89] found id: ""
	I1205 20:34:12.196933  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.196946  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:12.196954  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:12.197024  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:12.234471  585602 cri.go:89] found id: ""
	I1205 20:34:12.234500  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.234508  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:12.234516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:12.234585  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:12.269238  585602 cri.go:89] found id: ""
	I1205 20:34:12.269263  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.269271  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:12.269278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:12.269340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:12.307965  585602 cri.go:89] found id: ""
	I1205 20:34:12.308006  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.308016  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:12.308022  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:12.308081  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:12.343463  585602 cri.go:89] found id: ""
	I1205 20:34:12.343497  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.343510  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:12.343536  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:12.343574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:12.393393  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:12.393437  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:12.407991  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:12.408025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:12.477868  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:12.477910  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:12.477924  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:12.557274  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:12.557315  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
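
Every "describe nodes" attempt in this window fails identically: localhost:8443 refuses the connection because no kube-apiserver is listening yet, which is consistent with the empty crictl scans above. Two quick host-side checks that confirm this by hand (assumed diagnostic commands, not part of the harness):

	# anything bound to the apiserver port?
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	# does the apiserver answer its health endpoint?
	curl -sk https://localhost:8443/healthz || echo "connection refused"
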
	I1205 20:34:15.102587  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:15.115734  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:15.115808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:15.153057  585602 cri.go:89] found id: ""
	I1205 20:34:15.153091  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.153105  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:15.153113  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:15.153182  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:15.192762  585602 cri.go:89] found id: ""
	I1205 20:34:15.192815  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.192825  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:15.192831  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:15.192887  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:15.231330  585602 cri.go:89] found id: ""
	I1205 20:34:15.231364  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.231374  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:15.231380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:15.231435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:15.265229  585602 cri.go:89] found id: ""
	I1205 20:34:15.265262  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.265271  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:15.265278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:15.265350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:15.299596  585602 cri.go:89] found id: ""
	I1205 20:34:15.299624  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.299634  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:15.299640  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:15.299699  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:15.336155  585602 cri.go:89] found id: ""
	I1205 20:34:15.336187  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.336195  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:15.336202  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:15.336256  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:15.371867  585602 cri.go:89] found id: ""
	I1205 20:34:15.371899  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.371909  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:15.371920  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:15.371976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:15.408536  585602 cri.go:89] found id: ""
	I1205 20:34:15.408566  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.408580  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:15.408592  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:15.408609  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:15.422499  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:15.422538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:15.495096  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:15.495131  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:15.495145  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:15.571411  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:15.571461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.612284  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:15.612319  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
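
With no containers to inspect, each cycle falls back to host-level sources. The exact commands are visible in the Run: lines above and can be replayed manually on the node to collect the same data:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
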
	I1205 20:34:15.165343  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.165619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:16.043962  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.542495  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.119936  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:19.622046  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.168869  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:18.184247  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:18.184370  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:18.226078  585602 cri.go:89] found id: ""
	I1205 20:34:18.226112  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.226124  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:18.226133  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:18.226202  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:18.266221  585602 cri.go:89] found id: ""
	I1205 20:34:18.266258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.266270  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:18.266278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:18.266349  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:18.305876  585602 cri.go:89] found id: ""
	I1205 20:34:18.305903  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.305912  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:18.305921  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:18.305971  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:18.342044  585602 cri.go:89] found id: ""
	I1205 20:34:18.342077  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.342089  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:18.342098  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:18.342160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:18.380240  585602 cri.go:89] found id: ""
	I1205 20:34:18.380290  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.380301  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:18.380310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:18.380372  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:18.416228  585602 cri.go:89] found id: ""
	I1205 20:34:18.416258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.416301  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:18.416311  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:18.416380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:18.453368  585602 cri.go:89] found id: ""
	I1205 20:34:18.453407  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.453420  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:18.453429  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:18.453513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:18.491689  585602 cri.go:89] found id: ""
	I1205 20:34:18.491727  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.491739  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:18.491754  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:18.491779  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:18.546614  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:18.546652  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:18.560516  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:18.560547  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:18.637544  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:18.637568  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:18.637582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:18.720410  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:18.720453  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:21.261494  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:21.276378  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:21.276473  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:21.317571  585602 cri.go:89] found id: ""
	I1205 20:34:21.317602  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.317610  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:21.317617  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:21.317670  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:21.355174  585602 cri.go:89] found id: ""
	I1205 20:34:21.355202  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.355210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:21.355217  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:21.355277  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:21.393259  585602 cri.go:89] found id: ""
	I1205 20:34:21.393297  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.393310  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:21.393317  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:21.393408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:21.432286  585602 cri.go:89] found id: ""
	I1205 20:34:21.432329  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.432341  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:21.432348  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:21.432415  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:21.469844  585602 cri.go:89] found id: ""
	I1205 20:34:21.469877  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.469888  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:21.469896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:21.469964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:21.508467  585602 cri.go:89] found id: ""
	I1205 20:34:21.508507  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.508519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:21.508528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:21.508592  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:21.553053  585602 cri.go:89] found id: ""
	I1205 20:34:21.553185  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.553208  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:21.553226  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:21.553317  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:21.590595  585602 cri.go:89] found id: ""
	I1205 20:34:21.590629  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.590640  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:21.590654  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:21.590672  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:21.649493  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:21.649546  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:21.666114  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:21.666147  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:21.742801  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:21.742828  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:21.742858  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:21.822949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:21.823010  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:19.165951  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.664450  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.043233  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:23.043477  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:25.543490  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:22.119177  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.119685  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.366575  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:24.380894  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:24.380992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:24.416907  585602 cri.go:89] found id: ""
	I1205 20:34:24.416943  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.416956  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:24.416965  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:24.417034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:24.453303  585602 cri.go:89] found id: ""
	I1205 20:34:24.453337  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.453349  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:24.453358  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:24.453445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:24.496795  585602 cri.go:89] found id: ""
	I1205 20:34:24.496825  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.496833  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:24.496839  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:24.496907  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:24.539105  585602 cri.go:89] found id: ""
	I1205 20:34:24.539142  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.539154  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:24.539162  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:24.539230  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:24.576778  585602 cri.go:89] found id: ""
	I1205 20:34:24.576808  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.576816  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:24.576822  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:24.576879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:24.617240  585602 cri.go:89] found id: ""
	I1205 20:34:24.617271  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.617280  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:24.617293  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:24.617374  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:24.659274  585602 cri.go:89] found id: ""
	I1205 20:34:24.659316  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.659330  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:24.659342  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:24.659408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:24.701047  585602 cri.go:89] found id: ""
	I1205 20:34:24.701092  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.701105  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:24.701121  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:24.701139  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:24.741070  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:24.741115  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:24.793364  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:24.793407  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:24.807803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:24.807839  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:24.883194  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:24.883225  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:24.883243  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:24.163198  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.165402  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:27.544607  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.044244  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.619847  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:28.621467  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.621704  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:27.467460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:27.483055  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:27.483129  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:27.523718  585602 cri.go:89] found id: ""
	I1205 20:34:27.523752  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.523763  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:27.523772  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:27.523841  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:27.562872  585602 cri.go:89] found id: ""
	I1205 20:34:27.562899  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.562908  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:27.562915  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:27.562976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:27.601804  585602 cri.go:89] found id: ""
	I1205 20:34:27.601835  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.601845  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:27.601852  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:27.601916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:27.640553  585602 cri.go:89] found id: ""
	I1205 20:34:27.640589  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.640599  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:27.640605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:27.640672  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:27.680983  585602 cri.go:89] found id: ""
	I1205 20:34:27.681015  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.681027  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:27.681035  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:27.681105  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:27.720766  585602 cri.go:89] found id: ""
	I1205 20:34:27.720811  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.720821  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:27.720828  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:27.720886  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:27.761422  585602 cri.go:89] found id: ""
	I1205 20:34:27.761453  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.761466  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:27.761480  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:27.761550  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:27.799658  585602 cri.go:89] found id: ""
	I1205 20:34:27.799692  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.799705  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:27.799720  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:27.799736  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:27.851801  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:27.851845  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:27.865953  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:27.865984  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:27.941787  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:27.941824  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:27.941840  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:28.023556  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:28.023616  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:30.573267  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:30.586591  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:30.586679  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:30.629923  585602 cri.go:89] found id: ""
	I1205 20:34:30.629960  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.629974  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:30.629982  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:30.630048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:30.667045  585602 cri.go:89] found id: ""
	I1205 20:34:30.667078  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.667090  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:30.667098  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:30.667167  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:30.704479  585602 cri.go:89] found id: ""
	I1205 20:34:30.704510  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.704522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:30.704530  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:30.704620  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:30.746035  585602 cri.go:89] found id: ""
	I1205 20:34:30.746065  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.746077  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:30.746085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:30.746161  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:30.784375  585602 cri.go:89] found id: ""
	I1205 20:34:30.784415  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.784425  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:30.784431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:30.784487  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:30.821779  585602 cri.go:89] found id: ""
	I1205 20:34:30.821811  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.821822  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:30.821831  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:30.821905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:30.856927  585602 cri.go:89] found id: ""
	I1205 20:34:30.856963  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.856976  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:30.856984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:30.857088  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:30.895852  585602 cri.go:89] found id: ""
	I1205 20:34:30.895882  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.895894  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:30.895914  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:30.895930  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:30.947600  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:30.947642  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:30.962717  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:30.962753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:31.049225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:31.049262  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:31.049280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:31.126806  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:31.126850  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:28.665006  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:31.164172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:32.548634  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.042159  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.120370  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.621247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.670844  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:33.685063  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:33.685160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:33.718277  585602 cri.go:89] found id: ""
	I1205 20:34:33.718312  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.718321  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:33.718327  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:33.718378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.755409  585602 cri.go:89] found id: ""
	I1205 20:34:33.755445  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.755456  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:33.755465  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:33.755542  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:33.809447  585602 cri.go:89] found id: ""
	I1205 20:34:33.809506  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.809519  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:33.809527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:33.809599  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:33.848327  585602 cri.go:89] found id: ""
	I1205 20:34:33.848362  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.848376  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:33.848384  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:33.848444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:33.887045  585602 cri.go:89] found id: ""
	I1205 20:34:33.887082  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.887094  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:33.887103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:33.887178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:33.924385  585602 cri.go:89] found id: ""
	I1205 20:34:33.924418  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.924427  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:33.924434  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:33.924499  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:33.960711  585602 cri.go:89] found id: ""
	I1205 20:34:33.960738  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.960747  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:33.960757  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:33.960808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:33.998150  585602 cri.go:89] found id: ""
	I1205 20:34:33.998184  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.998193  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:33.998203  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:33.998215  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:34.041977  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:34.042006  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:34.095895  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:34.095940  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:34.109802  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:34.109836  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:34.185716  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:34.185740  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:34.185753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:36.767768  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:36.782114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:36.782201  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:36.820606  585602 cri.go:89] found id: ""
	I1205 20:34:36.820647  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.820659  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:36.820668  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:36.820736  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.164572  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.664069  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:37.043102  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:39.544667  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:38.120555  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.619948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:36.858999  585602 cri.go:89] found id: ""
	I1205 20:34:36.859033  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.859044  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:36.859051  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:36.859117  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:36.896222  585602 cri.go:89] found id: ""
	I1205 20:34:36.896257  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.896282  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:36.896290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:36.896352  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:36.935565  585602 cri.go:89] found id: ""
	I1205 20:34:36.935602  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.935612  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:36.935618  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:36.935671  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:36.974031  585602 cri.go:89] found id: ""
	I1205 20:34:36.974066  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.974079  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:36.974096  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:36.974166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:37.018243  585602 cri.go:89] found id: ""
	I1205 20:34:37.018278  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.018290  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:37.018300  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:37.018371  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:37.057715  585602 cri.go:89] found id: ""
	I1205 20:34:37.057742  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.057750  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:37.057756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:37.057806  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:37.099006  585602 cri.go:89] found id: ""
	I1205 20:34:37.099037  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.099045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:37.099055  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:37.099070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:37.186218  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:37.186264  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:37.232921  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:37.232955  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:37.285539  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:37.285581  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:37.301115  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:37.301155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:37.373249  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:39.873692  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:39.887772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:39.887847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:39.925558  585602 cri.go:89] found id: ""
	I1205 20:34:39.925595  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.925607  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:39.925615  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:39.925684  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:39.964967  585602 cri.go:89] found id: ""
	I1205 20:34:39.964994  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.965004  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:39.965011  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:39.965073  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:40.010875  585602 cri.go:89] found id: ""
	I1205 20:34:40.010911  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.010923  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:40.010930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:40.011003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:40.050940  585602 cri.go:89] found id: ""
	I1205 20:34:40.050970  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.050981  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:40.050990  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:40.051052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:40.086157  585602 cri.go:89] found id: ""
	I1205 20:34:40.086197  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.086210  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:40.086219  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:40.086283  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:40.123280  585602 cri.go:89] found id: ""
	I1205 20:34:40.123321  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.123333  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:40.123344  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:40.123414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:40.164755  585602 cri.go:89] found id: ""
	I1205 20:34:40.164784  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.164793  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:40.164800  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:40.164871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:40.211566  585602 cri.go:89] found id: ""
	I1205 20:34:40.211595  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.211608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:40.211621  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:40.211638  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:40.275269  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:40.275326  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:40.303724  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:40.303754  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:40.377315  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:40.377345  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:40.377360  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:40.457744  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:40.457794  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
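
The timestamps show the whole probe repeating on roughly a three-second cadence: pgrep for a kube-apiserver process, the crictl sweep, then the log gathering. A rough way to watch for the apiserver to appear by hand, reusing the same pgrep pattern as the harness (the 3 s interval and 100-attempt cap are assumptions, not the test's own timeout):

	# poll until a kube-apiserver process started by minikube shows up
	for i in $(seq 1 100); do
	  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    echo "kube-apiserver process found"; break
	  fi
	  sleep 3
	done
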
	I1205 20:34:38.163598  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.164173  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.043947  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:44.542445  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.621824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:45.120127  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:43.000390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:43.015220  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:43.015308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:43.051919  585602 cri.go:89] found id: ""
	I1205 20:34:43.051946  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.051955  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:43.051961  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:43.052034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:43.088188  585602 cri.go:89] found id: ""
	I1205 20:34:43.088230  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.088241  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:43.088249  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:43.088350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:43.125881  585602 cri.go:89] found id: ""
	I1205 20:34:43.125910  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.125922  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:43.125930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:43.125988  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:43.166630  585602 cri.go:89] found id: ""
	I1205 20:34:43.166657  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.166674  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:43.166682  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:43.166744  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:43.206761  585602 cri.go:89] found id: ""
	I1205 20:34:43.206791  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.206803  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:43.206810  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:43.206873  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:43.242989  585602 cri.go:89] found id: ""
	I1205 20:34:43.243017  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.243026  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:43.243033  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:43.243094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:43.281179  585602 cri.go:89] found id: ""
	I1205 20:34:43.281208  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.281217  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:43.281223  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:43.281272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:43.317283  585602 cri.go:89] found id: ""
	I1205 20:34:43.317314  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.317326  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:43.317347  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:43.317362  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:43.369262  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:43.369303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:43.386137  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:43.386182  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:43.458532  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:43.458553  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:43.458566  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:43.538254  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:43.538296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:46.083593  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:46.101024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:46.101133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:46.169786  585602 cri.go:89] found id: ""
	I1205 20:34:46.169817  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.169829  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:46.169838  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:46.169905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:46.218647  585602 cri.go:89] found id: ""
	I1205 20:34:46.218689  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.218704  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:46.218713  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:46.218790  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:46.262718  585602 cri.go:89] found id: ""
	I1205 20:34:46.262749  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.262758  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:46.262764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:46.262846  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:46.301606  585602 cri.go:89] found id: ""
	I1205 20:34:46.301638  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.301649  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:46.301656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:46.301714  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:46.337313  585602 cri.go:89] found id: ""
	I1205 20:34:46.337347  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.337356  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:46.337362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:46.337422  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:46.380171  585602 cri.go:89] found id: ""
	I1205 20:34:46.380201  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.380209  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:46.380215  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:46.380288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:46.423054  585602 cri.go:89] found id: ""
	I1205 20:34:46.423089  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.423101  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:46.423109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:46.423178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:46.467615  585602 cri.go:89] found id: ""
	I1205 20:34:46.467647  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.467659  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:46.467673  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:46.467687  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:46.522529  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:46.522579  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:46.537146  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:46.537199  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:46.609585  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:46.609618  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:46.609637  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:46.696093  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:46.696152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:45.164249  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.664159  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:46.547883  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.043793  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.623375  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:50.122680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.238735  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:49.256406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:49.256484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:49.294416  585602 cri.go:89] found id: ""
	I1205 20:34:49.294449  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.294458  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:49.294467  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:49.294528  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:49.334235  585602 cri.go:89] found id: ""
	I1205 20:34:49.334268  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.334282  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:49.334290  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:49.334362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:49.372560  585602 cri.go:89] found id: ""
	I1205 20:34:49.372637  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.372662  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:49.372674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:49.372756  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:49.413779  585602 cri.go:89] found id: ""
	I1205 20:34:49.413813  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.413822  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:49.413829  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:49.413900  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:49.449513  585602 cri.go:89] found id: ""
	I1205 20:34:49.449543  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.449553  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:49.449560  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:49.449630  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:49.488923  585602 cri.go:89] found id: ""
	I1205 20:34:49.488961  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.488973  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:49.488982  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:49.489050  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:49.524922  585602 cri.go:89] found id: ""
	I1205 20:34:49.524959  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.524971  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:49.524980  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:49.525048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:49.565700  585602 cri.go:89] found id: ""
	I1205 20:34:49.565735  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.565745  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:49.565756  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:49.565769  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:49.624297  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:49.624339  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:49.641424  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:49.641465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:49.721474  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:49.721504  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:49.721517  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:49.810777  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:49.810822  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:49.664998  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.163337  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:51.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:54.045218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.621649  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:55.120035  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.354661  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:52.368481  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:52.368555  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:52.407081  585602 cri.go:89] found id: ""
	I1205 20:34:52.407110  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.407118  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:52.407125  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:52.407189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:52.444462  585602 cri.go:89] found id: ""
	I1205 20:34:52.444489  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.444498  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:52.444505  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:52.444562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:52.483546  585602 cri.go:89] found id: ""
	I1205 20:34:52.483573  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.483582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:52.483595  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:52.483648  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:52.526529  585602 cri.go:89] found id: ""
	I1205 20:34:52.526567  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.526579  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:52.526587  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:52.526655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:52.564875  585602 cri.go:89] found id: ""
	I1205 20:34:52.564904  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.564913  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:52.564919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:52.564984  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:52.599367  585602 cri.go:89] found id: ""
	I1205 20:34:52.599397  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.599410  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:52.599419  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:52.599475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:52.638192  585602 cri.go:89] found id: ""
	I1205 20:34:52.638233  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.638247  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:52.638255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:52.638336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:52.675227  585602 cri.go:89] found id: ""
	I1205 20:34:52.675264  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.675275  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:52.675287  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:52.675311  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:52.716538  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:52.716582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:52.772121  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:52.772162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:52.787598  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:52.787632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:52.865380  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:52.865408  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:52.865422  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.449288  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:55.462386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:55.462474  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:55.498350  585602 cri.go:89] found id: ""
	I1205 20:34:55.498382  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.498391  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:55.498397  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:55.498457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:55.540878  585602 cri.go:89] found id: ""
	I1205 20:34:55.540915  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.540929  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:55.540939  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:55.541022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:55.577248  585602 cri.go:89] found id: ""
	I1205 20:34:55.577277  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.577288  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:55.577294  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:55.577375  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:55.615258  585602 cri.go:89] found id: ""
	I1205 20:34:55.615287  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.615308  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:55.615316  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:55.615384  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:55.652102  585602 cri.go:89] found id: ""
	I1205 20:34:55.652136  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.652147  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:55.652157  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:55.652228  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:55.689353  585602 cri.go:89] found id: ""
	I1205 20:34:55.689387  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.689399  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:55.689408  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:55.689486  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:55.727603  585602 cri.go:89] found id: ""
	I1205 20:34:55.727634  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.727648  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:55.727657  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:55.727729  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:55.765103  585602 cri.go:89] found id: ""
	I1205 20:34:55.765134  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.765143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:55.765156  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:55.765169  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:55.823878  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:55.823923  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:55.838966  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:55.839001  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:55.909385  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:55.909412  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:55.909424  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.992036  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:55.992080  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:54.165488  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.166030  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.542663  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.543260  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:57.120140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:59.621190  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.537231  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:58.552307  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:58.552392  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:58.589150  585602 cri.go:89] found id: ""
	I1205 20:34:58.589184  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.589200  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:58.589206  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:58.589272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:58.630344  585602 cri.go:89] found id: ""
	I1205 20:34:58.630370  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.630378  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:58.630385  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:58.630452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:58.669953  585602 cri.go:89] found id: ""
	I1205 20:34:58.669981  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.669991  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:58.669999  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:58.670055  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:58.708532  585602 cri.go:89] found id: ""
	I1205 20:34:58.708562  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.708570  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:58.708577  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:58.708631  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:58.745944  585602 cri.go:89] found id: ""
	I1205 20:34:58.745975  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.745986  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:58.745994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:58.746051  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.787177  585602 cri.go:89] found id: ""
	I1205 20:34:58.787206  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.787214  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:58.787221  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:58.787272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:58.822084  585602 cri.go:89] found id: ""
	I1205 20:34:58.822123  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.822134  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:58.822142  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:58.822210  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:58.858608  585602 cri.go:89] found id: ""
	I1205 20:34:58.858645  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.858657  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:58.858670  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:58.858691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:58.873289  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:58.873322  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:58.947855  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:58.947884  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:58.947900  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:59.028348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:59.028397  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:59.069172  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:59.069206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.623309  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:01.637362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:01.637449  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:01.678867  585602 cri.go:89] found id: ""
	I1205 20:35:01.678907  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.678919  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:01.678928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:01.679001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:01.715333  585602 cri.go:89] found id: ""
	I1205 20:35:01.715364  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.715372  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:01.715379  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:01.715439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:01.754247  585602 cri.go:89] found id: ""
	I1205 20:35:01.754277  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.754286  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:01.754292  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:01.754348  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:01.791922  585602 cri.go:89] found id: ""
	I1205 20:35:01.791957  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.791968  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:01.791977  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:01.792045  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:01.827261  585602 cri.go:89] found id: ""
	I1205 20:35:01.827294  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.827307  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:01.827315  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:01.827389  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.665248  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.163431  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.043056  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:03.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:02.122540  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:04.620544  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.864205  585602 cri.go:89] found id: ""
	I1205 20:35:01.864234  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.864243  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:01.864249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:01.864332  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:01.902740  585602 cri.go:89] found id: ""
	I1205 20:35:01.902773  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.902783  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:01.902789  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:01.902857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:01.941627  585602 cri.go:89] found id: ""
	I1205 20:35:01.941657  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.941666  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:01.941677  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:01.941690  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.995743  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:01.995791  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:02.010327  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:02.010368  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:02.086879  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:02.086907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:02.086921  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:02.166500  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:02.166538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:04.716638  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:04.730922  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:04.730992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:04.768492  585602 cri.go:89] found id: ""
	I1205 20:35:04.768524  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.768534  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:04.768540  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:04.768606  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:04.803740  585602 cri.go:89] found id: ""
	I1205 20:35:04.803776  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.803789  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:04.803797  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:04.803866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:04.840907  585602 cri.go:89] found id: ""
	I1205 20:35:04.840947  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.840960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:04.840968  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:04.841036  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:04.875901  585602 cri.go:89] found id: ""
	I1205 20:35:04.875933  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.875943  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:04.875949  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:04.876003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:04.913581  585602 cri.go:89] found id: ""
	I1205 20:35:04.913617  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.913627  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:04.913634  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:04.913689  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:04.952460  585602 cri.go:89] found id: ""
	I1205 20:35:04.952504  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.952519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:04.952528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:04.952617  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:04.989939  585602 cri.go:89] found id: ""
	I1205 20:35:04.989968  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.989979  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:04.989985  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:04.990041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:05.025017  585602 cri.go:89] found id: ""
	I1205 20:35:05.025052  585602 logs.go:282] 0 containers: []
	W1205 20:35:05.025066  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:05.025078  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:05.025094  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:05.068179  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:05.068223  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:05.127311  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:05.127369  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:05.141092  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:05.141129  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:05.217648  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:05.217678  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:05.217691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:03.163987  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:05.164131  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:07.165804  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:06.043765  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:08.036400  585113 pod_ready.go:82] duration metric: took 4m0.000157493s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	E1205 20:35:08.036457  585113 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:35:08.036489  585113 pod_ready.go:39] duration metric: took 4m11.05050249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:08.036554  585113 kubeadm.go:597] duration metric: took 4m18.178903617s to restartPrimaryControlPlane
	W1205 20:35:08.036733  585113 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:08.036784  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:06.621887  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:09.119692  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:07.793457  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:07.808710  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:07.808778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:07.846331  585602 cri.go:89] found id: ""
	I1205 20:35:07.846366  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.846380  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:07.846389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:07.846462  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:07.881185  585602 cri.go:89] found id: ""
	I1205 20:35:07.881222  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.881236  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:07.881243  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:07.881307  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:07.918463  585602 cri.go:89] found id: ""
	I1205 20:35:07.918501  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.918514  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:07.918522  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:07.918589  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:07.956329  585602 cri.go:89] found id: ""
	I1205 20:35:07.956364  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.956375  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:07.956385  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:07.956456  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:07.992173  585602 cri.go:89] found id: ""
	I1205 20:35:07.992212  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.992222  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:07.992229  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:07.992318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:08.030183  585602 cri.go:89] found id: ""
	I1205 20:35:08.030214  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.030226  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:08.030235  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:08.030309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:08.072320  585602 cri.go:89] found id: ""
	I1205 20:35:08.072362  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.072374  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:08.072382  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:08.072452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:08.124220  585602 cri.go:89] found id: ""
	I1205 20:35:08.124253  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.124277  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:08.124292  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:08.124310  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:08.171023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:08.171057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:08.237645  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:08.237699  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:08.252708  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:08.252744  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:08.343107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:08.343140  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:08.343158  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:10.919646  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:10.934494  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:10.934562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:10.971816  585602 cri.go:89] found id: ""
	I1205 20:35:10.971855  585602 logs.go:282] 0 containers: []
	W1205 20:35:10.971868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:10.971878  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:10.971950  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:11.010031  585602 cri.go:89] found id: ""
	I1205 20:35:11.010071  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.010084  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:11.010095  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:11.010170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:11.046520  585602 cri.go:89] found id: ""
	I1205 20:35:11.046552  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.046561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:11.046568  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:11.046632  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:11.081385  585602 cri.go:89] found id: ""
	I1205 20:35:11.081426  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.081440  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:11.081448  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:11.081522  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:11.122529  585602 cri.go:89] found id: ""
	I1205 20:35:11.122559  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.122568  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:11.122576  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:11.122656  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:11.161684  585602 cri.go:89] found id: ""
	I1205 20:35:11.161767  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.161788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:11.161797  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:11.161862  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:11.199796  585602 cri.go:89] found id: ""
	I1205 20:35:11.199824  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.199833  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:11.199842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:11.199916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:11.235580  585602 cri.go:89] found id: ""
	I1205 20:35:11.235617  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.235625  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:11.235635  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:11.235647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:11.291005  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:11.291055  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:11.305902  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:11.305947  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:11.375862  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:11.375894  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:11.375915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:11.456701  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:11.456746  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:09.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.664200  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.119954  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:13.120903  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:15.622247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:14.006509  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:14.020437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:14.020531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:14.056878  585602 cri.go:89] found id: ""
	I1205 20:35:14.056905  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.056915  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:14.056923  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:14.056993  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:14.091747  585602 cri.go:89] found id: ""
	I1205 20:35:14.091782  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.091792  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:14.091800  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:14.091860  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:14.131409  585602 cri.go:89] found id: ""
	I1205 20:35:14.131440  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.131453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:14.131461  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:14.131532  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:14.170726  585602 cri.go:89] found id: ""
	I1205 20:35:14.170754  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.170765  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:14.170773  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:14.170851  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:14.208619  585602 cri.go:89] found id: ""
	I1205 20:35:14.208654  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.208666  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:14.208674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:14.208747  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:14.247734  585602 cri.go:89] found id: ""
	I1205 20:35:14.247771  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.247784  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:14.247793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:14.247855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:14.296090  585602 cri.go:89] found id: ""
	I1205 20:35:14.296119  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.296129  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:14.296136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:14.296205  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:14.331009  585602 cri.go:89] found id: ""
	I1205 20:35:14.331037  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.331045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:14.331057  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:14.331070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:14.384877  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:14.384935  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:14.400458  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:14.400507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:14.475745  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:14.475774  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:14.475787  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:14.553150  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:14.553192  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:14.164516  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:16.165316  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:18.119418  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.120499  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:17.095700  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:17.109135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:17.109215  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:17.146805  585602 cri.go:89] found id: ""
	I1205 20:35:17.146838  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.146851  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:17.146861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:17.146919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:17.186861  585602 cri.go:89] found id: ""
	I1205 20:35:17.186891  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.186901  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:17.186907  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:17.186960  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:17.223113  585602 cri.go:89] found id: ""
	I1205 20:35:17.223148  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.223159  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:17.223166  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:17.223238  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:17.263066  585602 cri.go:89] found id: ""
	I1205 20:35:17.263098  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.263110  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:17.263118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:17.263187  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:17.300113  585602 cri.go:89] found id: ""
	I1205 20:35:17.300153  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.300167  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:17.300175  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:17.300237  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:17.339135  585602 cri.go:89] found id: ""
	I1205 20:35:17.339172  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.339184  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:17.339193  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:17.339260  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:17.376200  585602 cri.go:89] found id: ""
	I1205 20:35:17.376229  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.376239  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:17.376248  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:17.376354  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:17.411852  585602 cri.go:89] found id: ""
	I1205 20:35:17.411895  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.411906  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:17.411919  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:17.411948  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:17.463690  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:17.463729  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:17.478912  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:17.478946  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:17.552874  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:17.552907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:17.552933  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:17.633621  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:17.633667  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:20.175664  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:20.191495  585602 kubeadm.go:597] duration metric: took 4m4.568774806s to restartPrimaryControlPlane
	W1205 20:35:20.191570  585602 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:20.191594  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:20.660014  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:20.676684  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:20.688338  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:20.699748  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:20.699770  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:20.699822  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:20.710417  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:20.710497  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:20.722295  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:20.732854  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:20.732933  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:20.744242  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.754593  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:20.754671  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.766443  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:20.777087  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:20.777157  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:35:20.788406  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:20.869602  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:35:20.869778  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:21.022417  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:21.022558  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:21.022715  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:35:21.213817  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:21.216995  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:21.217146  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:21.217240  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:21.217373  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:21.217502  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:21.217614  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:21.217699  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:21.217784  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:21.217876  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:21.217985  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:21.218129  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:21.218186  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:21.218289  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:21.337924  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:21.464355  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:21.709734  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:21.837040  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:21.860767  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:21.860894  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:21.860934  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:22.002564  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:18.663978  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.665113  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.622593  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.120101  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.004407  585602 out.go:235]   - Booting up control plane ...
	I1205 20:35:22.004560  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:22.009319  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:22.010412  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:22.019041  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:22.021855  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:35:23.163493  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.164833  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.164914  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.619140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.622476  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.664525  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:32.163413  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.411201  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.37438104s)
	I1205 20:35:34.411295  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:34.428580  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:34.439233  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:34.450165  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:34.450192  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:34.450255  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:34.461910  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:34.461985  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:34.473936  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:34.484160  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:34.484240  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:34.495772  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.507681  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:34.507757  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.519932  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:34.532111  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:34.532190  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:35:34.543360  585113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:34.594095  585113 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:35:34.594214  585113 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:34.712502  585113 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:34.712685  585113 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:34.712818  585113 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:35:34.729419  585113 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:34.731281  585113 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:34.731395  585113 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:34.731486  585113 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:34.731614  585113 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:34.731715  585113 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:34.731812  585113 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:34.731902  585113 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:34.731994  585113 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:34.732082  585113 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:34.732179  585113 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:34.732252  585113 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:34.732336  585113 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:34.732428  585113 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:35.125135  585113 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:35.188591  585113 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:35:35.330713  585113 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:35.497785  585113 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:35.839010  585113 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:35.839656  585113 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:35.842311  585113 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:32.118898  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.119153  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.164007  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:36.164138  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:35.844403  585113 out.go:235]   - Booting up control plane ...
	I1205 20:35:35.844534  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:35.844602  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:35.845242  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:35.865676  585113 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:35.871729  585113 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:35.871825  585113 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:36.007728  585113 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:35:36.007948  585113 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:35:36.510090  585113 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.141078ms
	I1205 20:35:36.510208  585113 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:35:36.119432  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:38.121093  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.620523  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:41.512166  585113 kubeadm.go:310] [api-check] The API server is healthy after 5.00243802s
	I1205 20:35:41.529257  585113 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:35:41.545958  585113 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:35:41.585500  585113 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:35:41.585726  585113 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-789000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:35:41.606394  585113 kubeadm.go:310] [bootstrap-token] Using token: j30n5x.myrhz9pya6yl1f1z
	I1205 20:35:41.608046  585113 out.go:235]   - Configuring RBAC rules ...
	I1205 20:35:41.608229  585113 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:35:41.616083  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:35:41.625777  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:35:41.629934  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:35:41.633726  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:35:41.640454  585113 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:35:41.923125  585113 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:35:42.363841  585113 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:35:42.924569  585113 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:35:42.924594  585113 kubeadm.go:310] 
	I1205 20:35:42.924660  585113 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:35:42.924668  585113 kubeadm.go:310] 
	I1205 20:35:42.924750  585113 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:35:42.924768  585113 kubeadm.go:310] 
	I1205 20:35:42.924802  585113 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:35:42.924865  585113 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:35:42.924926  585113 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:35:42.924969  585113 kubeadm.go:310] 
	I1205 20:35:42.925060  585113 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:35:42.925069  585113 kubeadm.go:310] 
	I1205 20:35:42.925120  585113 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:35:42.925154  585113 kubeadm.go:310] 
	I1205 20:35:42.925255  585113 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:35:42.925374  585113 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:35:42.925477  585113 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:35:42.925488  585113 kubeadm.go:310] 
	I1205 20:35:42.925604  585113 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:35:42.925691  585113 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:35:42.925701  585113 kubeadm.go:310] 
	I1205 20:35:42.925830  585113 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.925966  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:35:42.926019  585113 kubeadm.go:310] 	--control-plane 
	I1205 20:35:42.926034  585113 kubeadm.go:310] 
	I1205 20:35:42.926136  585113 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:35:42.926147  585113 kubeadm.go:310] 
	I1205 20:35:42.926258  585113 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.926400  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 20:35:42.927105  585113 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:35:42.927269  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:35:42.927283  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:35:42.929046  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:35:38.164698  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.665499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:42.930620  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:35:42.941706  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:35:42.964041  585113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:35:42.964154  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.964191  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-789000 minikube.k8s.io/updated_at=2024_12_05T20_35_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=embed-certs-789000 minikube.k8s.io/primary=true
	I1205 20:35:43.027876  585113 ops.go:34] apiserver oom_adj: -16
	I1205 20:35:43.203087  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:43.703446  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.203895  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.703277  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:45.203421  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.623820  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.118957  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.704129  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.203682  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.703213  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.203225  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.330051  585113 kubeadm.go:1113] duration metric: took 4.365966546s to wait for elevateKubeSystemPrivileges
	I1205 20:35:47.330104  585113 kubeadm.go:394] duration metric: took 4m57.530103825s to StartCluster
	I1205 20:35:47.330143  585113 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.330296  585113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:35:47.332937  585113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.333273  585113 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:47.333380  585113 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:35:47.333478  585113 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-789000"
	I1205 20:35:47.333500  585113 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-789000"
	I1205 20:35:47.333499  585113 addons.go:69] Setting default-storageclass=true in profile "embed-certs-789000"
	W1205 20:35:47.333510  585113 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:35:47.333523  585113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-789000"
	I1205 20:35:47.333545  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.333554  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:47.333631  585113 addons.go:69] Setting metrics-server=true in profile "embed-certs-789000"
	I1205 20:35:47.333651  585113 addons.go:234] Setting addon metrics-server=true in "embed-certs-789000"
	W1205 20:35:47.333660  585113 addons.go:243] addon metrics-server should already be in state true
	I1205 20:35:47.333692  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.334001  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334043  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334003  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334101  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334157  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334339  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.335448  585113 out.go:177] * Verifying Kubernetes components...
	I1205 20:35:47.337056  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:47.353039  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I1205 20:35:47.353726  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.354437  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.354467  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.354870  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.355580  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.355654  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.355702  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I1205 20:35:47.355760  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1205 20:35:47.356180  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356224  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356771  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356796  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.356815  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356834  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.357246  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357245  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.357862  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.357916  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.361951  585113 addons.go:234] Setting addon default-storageclass=true in "embed-certs-789000"
	W1205 20:35:47.361974  585113 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:35:47.362004  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.362369  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.362416  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.372862  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I1205 20:35:47.373465  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.373983  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.374011  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.374347  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.374570  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.376329  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.378476  585113 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:35:47.379882  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:35:47.379909  585113 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:35:47.379933  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.382045  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
	I1205 20:35:47.382855  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.383440  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.383459  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.383563  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.383828  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.384092  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.384101  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.384117  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.384150  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1205 20:35:47.384381  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.384517  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.384635  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.384705  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.384850  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.385249  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.385262  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.385613  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.385744  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.386054  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.386085  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.387649  585113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:35:43.164980  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.665449  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.665725  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.388998  585113 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.389011  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:35:47.389025  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.391724  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392285  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.392317  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392362  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.392521  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.392663  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.392804  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.402558  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I1205 20:35:47.403109  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.403636  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.403653  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.403977  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.404155  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.405636  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.405859  585113 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.405876  585113 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:35:47.405894  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.408366  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.408827  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.408868  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.409107  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.409276  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.409436  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.409577  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.589046  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:47.620164  585113 node_ready.go:35] waiting up to 6m0s for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635800  585113 node_ready.go:49] node "embed-certs-789000" has status "Ready":"True"
	I1205 20:35:47.635824  585113 node_ready.go:38] duration metric: took 15.625152ms for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635836  585113 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:47.647842  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:47.738529  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:35:47.738558  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:35:47.741247  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.741443  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.822503  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:35:47.822543  585113 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:35:47.886482  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:47.886512  585113 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:35:47.926018  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:48.100013  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100059  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.100371  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.100392  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.100408  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100416  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.102261  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.102313  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.102342  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115407  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.115429  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.115762  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.115859  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115870  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721035  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721068  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721380  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721400  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.721447  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721855  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721868  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721880  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.294512  585113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.36844122s)
	I1205 20:35:49.294581  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.294598  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.294953  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295014  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295028  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295057  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.295071  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.295341  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295391  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295403  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295414  585113 addons.go:475] Verifying addon metrics-server=true in "embed-certs-789000"
	I1205 20:35:49.297183  585113 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:35:49.298509  585113 addons.go:510] duration metric: took 1.965140064s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
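
The addon step logged above follows one pattern: each manifest is copied onto the node under /etc/kubernetes/addons ("scp memory --> ..."), then applied with the pinned kubectl binary under the cluster kubeconfig. Below is a minimal Go sketch of the apply half only, assuming it runs as root on the node itself rather than through minikube's ssh_runner; the binary and manifest paths are the ones shown in the log.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Same shape as the logged command:
        //   sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        //     /var/lib/minikube/binaries/v1.31.2/kubectl apply -f <manifest> ...
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.31.2/kubectl", args...)
        // Point kubectl at the in-VM kubeconfig, keeping the rest of the environment.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
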
	I1205 20:35:49.657195  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.121445  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:49.622568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:50.163712  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.165654  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.155012  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.155309  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.155346  585113 pod_ready.go:82] duration metric: took 6.507465102s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.155356  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160866  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.160895  585113 pod_ready.go:82] duration metric: took 5.529623ms for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160909  585113 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166444  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.166475  585113 pod_ready.go:82] duration metric: took 5.558605ms for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166487  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:52.118202  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.119543  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.664661  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.162802  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:56.172832  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.173005  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.173052  585113 pod_ready.go:82] duration metric: took 3.006542827s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.173068  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178461  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.178489  585113 pod_ready.go:82] duration metric: took 5.413563ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178499  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183130  585113 pod_ready.go:93] pod "kube-proxy-znjpk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.183162  585113 pod_ready.go:82] duration metric: took 4.655743ms for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183178  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351816  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.351842  585113 pod_ready.go:82] duration metric: took 168.656328ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351851  585113 pod_ready.go:39] duration metric: took 9.716003373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:57.351866  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:35:57.351921  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:57.368439  585113 api_server.go:72] duration metric: took 10.035127798s to wait for apiserver process to appear ...
	I1205 20:35:57.368471  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:35:57.368496  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:35:57.372531  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:35:57.373449  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:35:57.373466  585113 api_server.go:131] duration metric: took 4.987422ms to wait for apiserver health ...
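
The health check reported here is simply an HTTPS GET against /healthz on the apiserver endpoint; a healthy apiserver answers 200 with the literal body "ok", which is what the log shows. A minimal Go sketch of the same probe follows; the address is the one from the log, and skipping TLS verification is an assumption made only to keep the sketch self-contained (minikube itself trusts the cluster CA).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustrative only: the apiserver serves a cluster-CA certificate,
                // verification is skipped here so the example needs no CA file.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.200:8443/healthz")
        if err != nil {
            fmt.Println("healthz not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // Expect: status=200 body="ok"
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
    }
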
	I1205 20:35:57.373474  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:35:57.554591  585113 system_pods.go:59] 9 kube-system pods found
	I1205 20:35:57.554620  585113 system_pods.go:61] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.554625  585113 system_pods.go:61] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.554629  585113 system_pods.go:61] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.554633  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.554637  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.554640  585113 system_pods.go:61] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.554643  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.554649  585113 system_pods.go:61] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.554653  585113 system_pods.go:61] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.554660  585113 system_pods.go:74] duration metric: took 181.180919ms to wait for pod list to return data ...
	I1205 20:35:57.554667  585113 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:35:57.757196  585113 default_sa.go:45] found service account: "default"
	I1205 20:35:57.757226  585113 default_sa.go:55] duration metric: took 202.553823ms for default service account to be created ...
	I1205 20:35:57.757236  585113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:35:57.956943  585113 system_pods.go:86] 9 kube-system pods found
	I1205 20:35:57.956976  585113 system_pods.go:89] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.956982  585113 system_pods.go:89] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.956985  585113 system_pods.go:89] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.956989  585113 system_pods.go:89] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.956992  585113 system_pods.go:89] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.956996  585113 system_pods.go:89] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.956999  585113 system_pods.go:89] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.957005  585113 system_pods.go:89] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.957010  585113 system_pods.go:89] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.957019  585113 system_pods.go:126] duration metric: took 199.777723ms to wait for k8s-apps to be running ...
	I1205 20:35:57.957028  585113 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:35:57.957079  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:57.971959  585113 system_svc.go:56] duration metric: took 14.916307ms WaitForService to wait for kubelet
	I1205 20:35:57.972000  585113 kubeadm.go:582] duration metric: took 10.638693638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:35:57.972027  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:35:58.153272  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:35:58.153302  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:35:58.153323  585113 node_conditions.go:105] duration metric: took 181.282208ms to run NodePressure ...
	I1205 20:35:58.153338  585113 start.go:241] waiting for startup goroutines ...
	I1205 20:35:58.153348  585113 start.go:246] waiting for cluster config update ...
	I1205 20:35:58.153361  585113 start.go:255] writing updated cluster config ...
	I1205 20:35:58.153689  585113 ssh_runner.go:195] Run: rm -f paused
	I1205 20:35:58.206377  585113 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:35:58.208199  585113 out.go:177] * Done! kubectl is now configured to use "embed-certs-789000" cluster and "default" namespace by default
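
The pod_ready lines throughout this run all come from the same loop: poll a pod until its PodReady condition reports True, or give up at the deadline. The following client-go sketch shows that pattern under stated assumptions (the kubeconfig path and the coredns pod name are taken from the log above); it is an illustration, not minikube's pod_ready.go.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 2s, up to the 6m0s budget seen in the log.
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "coredns-7c65d6cfc9-6mp2h", metav1.GetOptions{})
            if err != nil {
                return false, nil // keep polling through transient errors
            }
            return podReady(pod), nil
        })
        fmt.Println("wait result:", err)
    }
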
	I1205 20:35:56.626799  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.119621  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.164803  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.663254  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.119680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:03.121023  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.121537  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:02.025194  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:36:02.025306  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:02.025498  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:03.664172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.672410  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.623229  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.119845  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.025608  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:07.025922  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:08.164875  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.665374  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:12.622566  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.120084  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:13.163662  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.164021  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.619629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:19.620524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.026490  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:17.026747  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:19.663904  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:22.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:21.621019  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.119524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.164932  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.670748  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.119795  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:27.113870  585025 pod_ready.go:82] duration metric: took 4m0.000886242s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:27.113920  585025 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:36:27.113943  585025 pod_ready.go:39] duration metric: took 4m14.547292745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:27.113975  585025 kubeadm.go:597] duration metric: took 4m21.939840666s to restartPrimaryControlPlane
	W1205 20:36:27.114068  585025 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:36:27.114099  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:36:29.163499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:29.664158  585929 pod_ready.go:82] duration metric: took 4m0.007168384s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:29.664191  585929 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:36:29.664201  585929 pod_ready.go:39] duration metric: took 4m2.00733866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:29.664226  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:36:29.664290  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:29.664377  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:29.712790  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:29.712814  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:29.712819  585929 cri.go:89] found id: ""
	I1205 20:36:29.712826  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:29.712879  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.717751  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.721968  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:29.722045  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:29.770289  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:29.770322  585929 cri.go:89] found id: ""
	I1205 20:36:29.770330  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:29.770392  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.775391  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:29.775475  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:29.816354  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:29.816380  585929 cri.go:89] found id: ""
	I1205 20:36:29.816388  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:29.816454  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.821546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:29.821621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:29.870442  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:29.870467  585929 cri.go:89] found id: ""
	I1205 20:36:29.870476  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:29.870541  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.875546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:29.875658  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:29.924567  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:29.924595  585929 cri.go:89] found id: ""
	I1205 20:36:29.924603  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:29.924666  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.929148  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:29.929216  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:29.968092  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:29.968122  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:29.968126  585929 cri.go:89] found id: ""
	I1205 20:36:29.968134  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:29.968186  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.973062  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.977693  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:29.977762  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:30.014944  585929 cri.go:89] found id: ""
	I1205 20:36:30.014982  585929 logs.go:282] 0 containers: []
	W1205 20:36:30.014994  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:30.015002  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:30.015101  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:30.062304  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:30.062328  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:30.062332  585929 cri.go:89] found id: ""
	I1205 20:36:30.062339  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:30.062394  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.067152  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.071767  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:30.071788  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:30.125030  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:30.125069  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:30.167607  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:30.167641  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:30.217522  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:30.217558  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:30.298655  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:30.298695  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:30.346687  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:30.346721  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:30.887069  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:30.887126  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:30.907313  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:30.907360  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:30.950285  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:30.950326  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:30.990895  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:30.990929  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:31.032950  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:31.033010  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:31.115132  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:31.115176  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:31.257760  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:31.257797  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:31.300521  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:31.300553  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:31.338339  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:31.338373  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
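
The "Gathering logs for ..." block above is a loop over the container IDs discovered with "crictl ps -a --quiet --name=<component>", fetching the last 400 lines of each with "crictl logs --tail 400 <id>" (plus journalctl for kubelet and CRI-O). A minimal sketch of that loop, assuming it runs directly on the node with sudo available rather than through ssh_runner as minikube does; the two container IDs are copied from the log.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        containers := map[string]string{
            "kube-apiserver": "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d",
            "etcd":           "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff",
        }
        for name, id := range containers {
            // Equivalent to the shell command in the log:
            //   sudo /usr/bin/crictl logs --tail 400 <id>
            out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                fmt.Printf("%s: %v\n", name, err)
                continue
            }
            fmt.Printf("=== %s: captured %d bytes of logs ===\n", name, len(out))
        }
    }
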
	I1205 20:36:33.892406  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:36:33.908917  585929 api_server.go:72] duration metric: took 4m14.472283422s to wait for apiserver process to appear ...
	I1205 20:36:33.908950  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:36:33.908993  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:33.909067  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:33.958461  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:33.958496  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:33.958502  585929 cri.go:89] found id: ""
	I1205 20:36:33.958511  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:33.958585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.963333  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.969472  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:33.969549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:34.010687  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.010711  585929 cri.go:89] found id: ""
	I1205 20:36:34.010721  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:34.010790  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.016468  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:34.016557  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:34.056627  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:34.056656  585929 cri.go:89] found id: ""
	I1205 20:36:34.056666  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:34.056729  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.061343  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:34.061411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:34.099534  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:34.099563  585929 cri.go:89] found id: ""
	I1205 20:36:34.099573  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:34.099643  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.104828  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:34.104891  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:34.150749  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:34.150781  585929 cri.go:89] found id: ""
	I1205 20:36:34.150792  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:34.150863  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.155718  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:34.155797  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:34.202896  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:34.202927  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:34.202934  585929 cri.go:89] found id: ""
	I1205 20:36:34.202943  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:34.203028  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.207791  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.212163  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:34.212243  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:34.254423  585929 cri.go:89] found id: ""
	I1205 20:36:34.254458  585929 logs.go:282] 0 containers: []
	W1205 20:36:34.254470  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:34.254479  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:34.254549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:34.294704  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:34.294737  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:34.294741  585929 cri.go:89] found id: ""
	I1205 20:36:34.294753  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:34.294820  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.299361  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.305411  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:34.305437  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:34.357438  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:34.357472  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.405858  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:34.405893  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:34.898506  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:34.898551  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:35.009818  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:35.009856  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:35.048852  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:35.048882  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:35.100458  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:35.100511  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:35.139923  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:35.139959  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:35.184818  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:35.184852  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:35.265196  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:35.265238  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:35.280790  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:35.280830  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:35.323308  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:35.323343  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:35.364578  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:35.364610  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:35.411413  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:35.411456  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:35.458077  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:35.458117  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:37.997701  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:36:38.003308  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:36:38.004465  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:36:38.004495  585929 api_server.go:131] duration metric: took 4.095536578s to wait for apiserver health ...
	I1205 20:36:38.004505  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:36:38.004532  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:38.004598  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:37.027599  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:37.027910  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:38.048388  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.048427  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:38.048434  585929 cri.go:89] found id: ""
	I1205 20:36:38.048442  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:38.048514  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.052931  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.057338  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:38.057403  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:38.097715  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.097750  585929 cri.go:89] found id: ""
	I1205 20:36:38.097761  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:38.097830  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.104038  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:38.104110  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:38.148485  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.148510  585929 cri.go:89] found id: ""
	I1205 20:36:38.148519  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:38.148585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.153619  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:38.153702  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:38.190467  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.190495  585929 cri.go:89] found id: ""
	I1205 20:36:38.190505  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:38.190561  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.195177  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:38.195259  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:38.240020  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:38.240045  585929 cri.go:89] found id: ""
	I1205 20:36:38.240054  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:38.240123  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.244359  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:38.244425  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:38.282241  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:38.282267  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.282284  585929 cri.go:89] found id: ""
	I1205 20:36:38.282292  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:38.282357  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.287437  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.291561  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:38.291621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:38.333299  585929 cri.go:89] found id: ""
	I1205 20:36:38.333335  585929 logs.go:282] 0 containers: []
	W1205 20:36:38.333345  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:38.333352  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:38.333411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:38.370920  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.370948  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.370952  585929 cri.go:89] found id: ""
	I1205 20:36:38.370960  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:38.371037  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.375549  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.379517  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:38.379548  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.416990  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:38.417023  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:38.499859  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:38.499905  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:38.625291  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:38.625332  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.672549  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:38.672586  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.710017  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:38.710055  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.754004  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:38.754049  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:38.802163  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:38.802206  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:38.817670  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:38.817704  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.864833  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:38.864875  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.909490  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:38.909526  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.952117  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:38.952164  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:39.347620  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:39.347686  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:39.392412  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:39.392450  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:39.433711  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:39.433749  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:41.996602  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:36:41.996634  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:41.996640  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:41.996644  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:41.996648  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:41.996651  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:41.996654  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:41.996661  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:41.996665  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:41.996674  585929 system_pods.go:74] duration metric: took 3.992162062s to wait for pod list to return data ...
	I1205 20:36:41.996682  585929 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:36:41.999553  585929 default_sa.go:45] found service account: "default"
	I1205 20:36:41.999580  585929 default_sa.go:55] duration metric: took 2.889197ms for default service account to be created ...
	I1205 20:36:41.999589  585929 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:36:42.005061  585929 system_pods.go:86] 8 kube-system pods found
	I1205 20:36:42.005099  585929 system_pods.go:89] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:42.005111  585929 system_pods.go:89] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:42.005118  585929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:42.005126  585929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:42.005135  585929 system_pods.go:89] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:42.005143  585929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:42.005159  585929 system_pods.go:89] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:42.005171  585929 system_pods.go:89] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:42.005187  585929 system_pods.go:126] duration metric: took 5.591652ms to wait for k8s-apps to be running ...
	I1205 20:36:42.005201  585929 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:36:42.005267  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:42.021323  585929 system_svc.go:56] duration metric: took 16.10852ms WaitForService to wait for kubelet
	I1205 20:36:42.021358  585929 kubeadm.go:582] duration metric: took 4m22.584731606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:36:42.021424  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:36:42.024632  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:42.024658  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:42.024682  585929 node_conditions.go:105] duration metric: took 3.248548ms to run NodePressure ...
	I1205 20:36:42.024698  585929 start.go:241] waiting for startup goroutines ...
	I1205 20:36:42.024709  585929 start.go:246] waiting for cluster config update ...
	I1205 20:36:42.024742  585929 start.go:255] writing updated cluster config ...
	I1205 20:36:42.025047  585929 ssh_runner.go:195] Run: rm -f paused
	I1205 20:36:42.077303  585929 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:36:42.079398  585929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-942599" cluster and "default" namespace by default
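
The node_conditions lines in this run (ephemeral storage capacity 17734596Ki, cpu capacity 2) come from reading the node's reported capacity and pressure conditions. A minimal client-go sketch of the same read is below; the kubeconfig path is an illustrative assumption.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
            for _, c := range n.Status.Conditions {
                // Memory/Disk/PID pressure should all report "False" on a healthy node.
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
    }
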
	I1205 20:36:53.411276  585025 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297141231s)
	I1205 20:36:53.411423  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:53.432474  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:36:53.443908  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:36:53.454789  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:36:53.454821  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:36:53.454873  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:36:53.465648  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:36:53.465719  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:36:53.476492  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:36:53.486436  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:36:53.486505  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:36:53.499146  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.510237  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:36:53.510324  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.521186  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:36:53.531797  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:36:53.531890  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
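The stale-config check above is a grep-then-remove pattern: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected API endpoint, otherwise it is deleted before kubeadm init rewrites it. A condensed sketch of the same idea, using the paths and endpoint from the log:

    # keep each kubeconfig only if it references the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done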
	I1205 20:36:53.543056  585025 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:36:53.735019  585025 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:01.531096  585025 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:37:01.531179  585025 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:37:01.531278  585025 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:37:01.531407  585025 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:37:01.531546  585025 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:37:01.531635  585025 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:37:01.533284  585025 out.go:235]   - Generating certificates and keys ...
	I1205 20:37:01.533400  585025 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:37:01.533484  585025 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:37:01.533589  585025 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:37:01.533676  585025 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:37:01.533741  585025 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:37:01.533820  585025 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:37:01.533901  585025 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:37:01.533954  585025 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:37:01.534023  585025 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:37:01.534097  585025 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:37:01.534137  585025 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:37:01.534193  585025 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:37:01.534264  585025 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:37:01.534347  585025 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:37:01.534414  585025 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:37:01.534479  585025 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:37:01.534529  585025 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:37:01.534600  585025 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:37:01.534656  585025 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:37:01.536208  585025 out.go:235]   - Booting up control plane ...
	I1205 20:37:01.536326  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:37:01.536394  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:37:01.536487  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:37:01.536653  585025 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:37:01.536772  585025 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:37:01.536814  585025 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:37:01.536987  585025 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:37:01.537144  585025 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:37:01.537240  585025 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.640403ms
	I1205 20:37:01.537352  585025 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:37:01.537438  585025 kubeadm.go:310] [api-check] The API server is healthy after 5.002069704s
	I1205 20:37:01.537566  585025 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:37:01.537705  585025 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:37:01.537766  585025 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:37:01.537959  585025 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-816185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:37:01.538037  585025 kubeadm.go:310] [bootstrap-token] Using token: l8cx4j.koqnwrdaqrc08irs
	I1205 20:37:01.539683  585025 out.go:235]   - Configuring RBAC rules ...
	I1205 20:37:01.539813  585025 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:37:01.539945  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:37:01.540157  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:37:01.540346  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:37:01.540482  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:37:01.540602  585025 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:37:01.540746  585025 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:37:01.540818  585025 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:37:01.540905  585025 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:37:01.540922  585025 kubeadm.go:310] 
	I1205 20:37:01.541012  585025 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:37:01.541027  585025 kubeadm.go:310] 
	I1205 20:37:01.541149  585025 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:37:01.541160  585025 kubeadm.go:310] 
	I1205 20:37:01.541197  585025 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:37:01.541253  585025 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:37:01.541297  585025 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:37:01.541303  585025 kubeadm.go:310] 
	I1205 20:37:01.541365  585025 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:37:01.541371  585025 kubeadm.go:310] 
	I1205 20:37:01.541417  585025 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:37:01.541427  585025 kubeadm.go:310] 
	I1205 20:37:01.541486  585025 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:37:01.541593  585025 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:37:01.541689  585025 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:37:01.541707  585025 kubeadm.go:310] 
	I1205 20:37:01.541811  585025 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:37:01.541917  585025 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:37:01.541928  585025 kubeadm.go:310] 
	I1205 20:37:01.542020  585025 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542138  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:37:01.542171  585025 kubeadm.go:310] 	--control-plane 
	I1205 20:37:01.542180  585025 kubeadm.go:310] 
	I1205 20:37:01.542264  585025 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:37:01.542283  585025 kubeadm.go:310] 
	I1205 20:37:01.542407  585025 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542513  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
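The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the cluster CA's public key. If it is ever needed again it can be recomputed from the CA certificate with the standard openssl pipeline from the kubeadm documentation; the path below is an assumption based on the certificateDir /var/lib/minikube/certs shown earlier in this run:

    # recompute the discovery hash from the cluster CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'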
	I1205 20:37:01.542530  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:37:01.542538  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:37:01.543967  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:37:01.545652  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:37:01.557890  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
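The 496-byte file written to /etc/cni/net.d/1-k8s.conflist configures the standard CNI bridge plugin. Its exact contents are not shown in the log; the sketch below is only a generic bridge conflist of that shape, with illustrative subnet and plugin options that may differ from minikube's real file:

    # illustrative only -- minikube's real 1-k8s.conflist may differ
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF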
	I1205 20:37:01.577447  585025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:37:01.577532  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-816185 minikube.k8s.io/updated_at=2024_12_05T20_37_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=no-preload-816185 minikube.k8s.io/primary=true
	I1205 20:37:01.577542  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:01.618121  585025 ops.go:34] apiserver oom_adj: -16
	I1205 20:37:01.806825  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.307212  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.807893  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.307202  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.806891  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.307571  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.807485  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.307695  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.387751  585025 kubeadm.go:1113] duration metric: took 3.810307917s to wait for elevateKubeSystemPrivileges
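The elevateKubeSystemPrivileges step above boils down to two things visible in the log: creating the minikube-rbac binding that grants cluster-admin to kube-system:default, and polling until the default service account exists. Condensed into one sketch, using the same kubectl binary and kubeconfig paths as the log:

    # bind cluster-admin to kube-system:default, then wait for the default service account
    KCTL="sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
    $KCTL create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    until $KCTL get sa default >/dev/null 2>&1; do sleep 0.5; done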
	I1205 20:37:05.387790  585025 kubeadm.go:394] duration metric: took 5m0.269375789s to StartCluster
	I1205 20:37:05.387810  585025 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.387891  585025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:37:05.389703  585025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.389984  585025 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:37:05.390056  585025 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:37:05.390179  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:37:05.390193  585025 addons.go:69] Setting storage-provisioner=true in profile "no-preload-816185"
	I1205 20:37:05.390216  585025 addons.go:69] Setting default-storageclass=true in profile "no-preload-816185"
	I1205 20:37:05.390246  585025 addons.go:69] Setting metrics-server=true in profile "no-preload-816185"
	I1205 20:37:05.390281  585025 addons.go:234] Setting addon metrics-server=true in "no-preload-816185"
	W1205 20:37:05.390295  585025 addons.go:243] addon metrics-server should already be in state true
	I1205 20:37:05.390340  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390255  585025 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-816185"
	I1205 20:37:05.390263  585025 addons.go:234] Setting addon storage-provisioner=true in "no-preload-816185"
	W1205 20:37:05.390463  585025 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:37:05.390533  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390844  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390888  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.390852  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390947  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390973  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391032  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391810  585025 out.go:177] * Verifying Kubernetes components...
	I1205 20:37:05.393274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:37:05.408078  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40259
	I1205 20:37:05.408366  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1205 20:37:05.408765  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.408780  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.409315  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409337  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409441  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409465  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409767  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409800  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.410249  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I1205 20:37:05.410487  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.410537  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.410753  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.411387  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.411412  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.411847  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.412515  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.412565  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.413770  585025 addons.go:234] Setting addon default-storageclass=true in "no-preload-816185"
	W1205 20:37:05.413796  585025 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:37:05.413828  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.414184  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.414231  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.430214  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1205 20:37:05.430684  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.431260  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.431286  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.431697  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.431929  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.432941  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1205 20:37:05.433361  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.433835  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.433855  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.433933  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.434385  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.434596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.434638  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I1205 20:37:05.435193  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.435667  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.435694  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.435994  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.436000  585025 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:37:05.436635  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.436657  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.436683  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.437421  585025 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.437441  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:37:05.437461  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.438221  585025 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:37:05.439704  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:37:05.439721  585025 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:37:05.439737  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.440522  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441031  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.441058  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441198  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.441352  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.441458  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.441582  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.445842  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446223  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.446248  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446449  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.446661  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.446806  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.446923  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.472870  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I1205 20:37:05.473520  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.474053  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.474080  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.474456  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.474666  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.476603  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.476836  585025 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.476859  585025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:37:05.476886  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.480063  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480546  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.480580  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.481175  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.481331  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.481425  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.607284  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:37:05.627090  585025 node_ready.go:35] waiting up to 6m0s for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637577  585025 node_ready.go:49] node "no-preload-816185" has status "Ready":"True"
	I1205 20:37:05.637602  585025 node_ready.go:38] duration metric: took 10.476209ms for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637611  585025 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:05.642969  585025 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:05.696662  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.725276  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:37:05.725309  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:37:05.779102  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:37:05.779137  585025 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:37:05.814495  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.814531  585025 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:37:05.823828  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.863152  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.948854  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.948895  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949242  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949266  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949275  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.949294  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.949302  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949590  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949601  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949612  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.975655  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.975683  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.975962  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.975978  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004027  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.180164032s)
	I1205 20:37:07.004103  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004117  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004498  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004520  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004535  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004545  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004802  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004820  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208032  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.344819218s)
	I1205 20:37:07.208143  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208159  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208537  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208556  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208566  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208573  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208846  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208860  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208871  585025 addons.go:475] Verifying addon metrics-server=true in "no-preload-816185"
	I1205 20:37:07.210487  585025 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:37:07.212093  585025 addons.go:510] duration metric: took 1.822047986s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
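With the metrics-server addon applied, a check outside the test harness would be that its Deployment rolls out and the aggregated metrics API answers; a hedged example with plain kubectl (not part of the test flow above, and not expected to succeed while the pod stays Pending):

    # check the metrics-server rollout and the aggregated metrics API
    kubectl -n kube-system rollout status deploy/metrics-server --timeout=5m
    kubectl top nodes   # needs the metrics.k8s.io API to be serving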
	I1205 20:37:07.658678  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:08.156061  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:08.156094  585025 pod_ready.go:82] duration metric: took 2.513098547s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:08.156109  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:10.162704  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:12.163550  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.163578  585025 pod_ready.go:82] duration metric: took 4.007461295s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.163601  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169123  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.169155  585025 pod_ready.go:82] duration metric: took 5.544964ms for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169170  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.175288  585025 pod_ready.go:103] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:14.676107  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:14.676137  585025 pod_ready.go:82] duration metric: took 2.506959209s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.676146  585025 pod_ready.go:39] duration metric: took 9.038525731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
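The pod-readiness wait above can be reproduced by hand with kubectl wait against the same labels; for example, for two of the label sets listed in the log:

    # roughly equivalent manual check for the watched system pods
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s
    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=360s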
	I1205 20:37:14.676165  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:37:14.676222  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:37:14.692508  585025 api_server.go:72] duration metric: took 9.302489277s to wait for apiserver process to appear ...
	I1205 20:37:14.692540  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:37:14.692562  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:37:14.697176  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:37:14.698320  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:37:14.698345  585025 api_server.go:131] duration metric: took 5.796971ms to wait for apiserver health ...
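The healthz probe above hits the API server endpoint directly; the same check can be run through kubectl, which handles the client certificates:

    # ask the API server for its health endpoints
    kubectl get --raw='/healthz'
    kubectl get --raw='/readyz?verbose'   # per-check breakdown on recent clusters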
	I1205 20:37:14.698357  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:37:14.706456  585025 system_pods.go:59] 9 kube-system pods found
	I1205 20:37:14.706503  585025 system_pods.go:61] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.706512  585025 system_pods.go:61] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.706518  585025 system_pods.go:61] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.706524  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.706529  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.706534  585025 system_pods.go:61] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.706539  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.706549  585025 system_pods.go:61] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.706555  585025 system_pods.go:61] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.706565  585025 system_pods.go:74] duration metric: took 8.200516ms to wait for pod list to return data ...
	I1205 20:37:14.706577  585025 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:37:14.716217  585025 default_sa.go:45] found service account: "default"
	I1205 20:37:14.716259  585025 default_sa.go:55] duration metric: took 9.664045ms for default service account to be created ...
	I1205 20:37:14.716293  585025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:37:14.723293  585025 system_pods.go:86] 9 kube-system pods found
	I1205 20:37:14.723323  585025 system_pods.go:89] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.723329  585025 system_pods.go:89] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.723333  585025 system_pods.go:89] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.723337  585025 system_pods.go:89] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.723342  585025 system_pods.go:89] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.723346  585025 system_pods.go:89] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.723349  585025 system_pods.go:89] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.723355  585025 system_pods.go:89] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.723360  585025 system_pods.go:89] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.723368  585025 system_pods.go:126] duration metric: took 7.067824ms to wait for k8s-apps to be running ...
	I1205 20:37:14.723375  585025 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:37:14.723422  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:14.744142  585025 system_svc.go:56] duration metric: took 20.751867ms WaitForService to wait for kubelet
	I1205 20:37:14.744179  585025 kubeadm.go:582] duration metric: took 9.354165706s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:37:14.744200  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:37:14.751985  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:14.752026  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:14.752043  585025 node_conditions.go:105] duration metric: took 7.836665ms to run NodePressure ...
	I1205 20:37:14.752069  585025 start.go:241] waiting for startup goroutines ...
	I1205 20:37:14.752081  585025 start.go:246] waiting for cluster config update ...
	I1205 20:37:14.752095  585025 start.go:255] writing updated cluster config ...
	I1205 20:37:14.752490  585025 ssh_runner.go:195] Run: rm -f paused
	I1205 20:37:14.806583  585025 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:37:14.808574  585025 out.go:177] * Done! kubectl is now configured to use "no-preload-816185" cluster and "default" namespace by default
	I1205 20:37:17.029681  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:37:17.029940  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:37:17.029963  585602 kubeadm.go:310] 
	I1205 20:37:17.030022  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:37:17.030101  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:37:17.030128  585602 kubeadm.go:310] 
	I1205 20:37:17.030167  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:37:17.030209  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:37:17.030353  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:37:17.030369  585602 kubeadm.go:310] 
	I1205 20:37:17.030489  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:37:17.030540  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:37:17.030584  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:37:17.030594  585602 kubeadm.go:310] 
	I1205 20:37:17.030733  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:37:17.030843  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:37:17.030855  585602 kubeadm.go:310] 
	I1205 20:37:17.031025  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:37:17.031154  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:37:17.031268  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:37:17.031374  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:37:17.031386  585602 kubeadm.go:310] 
	I1205 20:37:17.032368  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:17.032493  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:37:17.032562  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 20:37:17.032709  585602 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
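When an init run ends like this one (old-k8s-version with v1.20.0, where the kubelet never answers on 10248), the troubleshooting steps quoted in the output are the natural starting point; gathered into one snippet, to be run on the node (for example via minikube ssh):

    # inspect the kubelet and any crashed control-plane containers on the node
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then: sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID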
	
	I1205 20:37:17.032762  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:37:17.518572  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:17.533868  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:37:17.547199  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:37:17.547224  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:37:17.547272  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:37:17.556733  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:37:17.556801  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:37:17.566622  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:37:17.577044  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:37:17.577121  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:37:17.588726  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.599269  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:37:17.599346  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.609243  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:37:17.618947  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:37:17.619034  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:37:17.629228  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:37:17.878785  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:39:13.972213  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:39:13.972379  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 20:39:13.973936  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:39:13.974035  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:39:13.974150  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:39:13.974251  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:39:13.974341  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:39:13.974404  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:39:13.976164  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:39:13.976248  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:39:13.976339  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:39:13.976449  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:39:13.976538  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:39:13.976642  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:39:13.976736  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:39:13.976832  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:39:13.976924  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:39:13.977025  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:39:13.977131  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:39:13.977189  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:39:13.977272  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:39:13.977389  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:39:13.977474  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:39:13.977566  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:39:13.977650  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:39:13.977776  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:39:13.977901  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:39:13.977976  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:39:13.978137  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:39:13.979473  585602 out.go:235]   - Booting up control plane ...
	I1205 20:39:13.979581  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:39:13.979664  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:39:13.979732  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:39:13.979803  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:39:13.979952  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:39:13.980017  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:39:13.980107  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980396  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980511  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980744  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980843  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981116  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981227  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981439  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981528  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981718  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981731  585602 kubeadm.go:310] 
	I1205 20:39:13.981773  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:39:13.981831  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:39:13.981839  585602 kubeadm.go:310] 
	I1205 20:39:13.981888  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:39:13.981941  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:39:13.982052  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:39:13.982059  585602 kubeadm.go:310] 
	I1205 20:39:13.982144  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:39:13.982174  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:39:13.982208  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:39:13.982215  585602 kubeadm.go:310] 
	I1205 20:39:13.982302  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:39:13.982415  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:39:13.982431  585602 kubeadm.go:310] 
	I1205 20:39:13.982540  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:39:13.982618  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:39:13.982701  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:39:13.982766  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:39:13.982839  585602 kubeadm.go:310] 
	I1205 20:39:13.982855  585602 kubeadm.go:394] duration metric: took 7m58.414377536s to StartCluster
	I1205 20:39:13.982907  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:39:13.982975  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:39:14.031730  585602 cri.go:89] found id: ""
	I1205 20:39:14.031767  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.031779  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:39:14.031791  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:39:14.031865  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:39:14.068372  585602 cri.go:89] found id: ""
	I1205 20:39:14.068420  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.068433  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:39:14.068440  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:39:14.068512  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:39:14.106807  585602 cri.go:89] found id: ""
	I1205 20:39:14.106837  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.106847  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:39:14.106856  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:39:14.106930  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:39:14.144926  585602 cri.go:89] found id: ""
	I1205 20:39:14.144952  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.144960  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:39:14.144974  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:39:14.145052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:39:14.182712  585602 cri.go:89] found id: ""
	I1205 20:39:14.182742  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.182754  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:39:14.182762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:39:14.182826  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:39:14.220469  585602 cri.go:89] found id: ""
	I1205 20:39:14.220505  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.220519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:39:14.220527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:39:14.220593  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:39:14.269791  585602 cri.go:89] found id: ""
	I1205 20:39:14.269823  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.269835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:39:14.269842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:39:14.269911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:39:14.313406  585602 cri.go:89] found id: ""
	I1205 20:39:14.313439  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.313450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:39:14.313464  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:39:14.313483  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:39:14.330488  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:39:14.330526  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:39:14.417358  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:39:14.417403  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:39:14.417421  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:39:14.530226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:39:14.530270  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:39:14.585471  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:39:14.585512  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 20:39:14.636389  585602 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 20:39:14.636456  585602 out.go:270] * 
	W1205 20:39:14.636535  585602 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.636549  585602 out.go:270] * 
	W1205 20:39:14.637475  585602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:39:14.640654  585602 out.go:201] 
	W1205 20:39:14.641873  585602 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.641931  585602 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 20:39:14.641975  585602 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 20:39:14.643389  585602 out.go:201] 
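	The kubelet-check failures and the closing suggestion above all point at the same manual follow-up: confirm on the node whether the kubelet is actually running, read its journal, see what CRI-O managed to start, and only then retry the start with the systemd cgroup driver. A minimal sketch of those steps, assuming shell access to the node (for example via minikube ssh -p <profile>); <profile> is a placeholder for the affected cluster profile, not a name taken from this log:
	
	    # On the node: is the kubelet service up, and why did it exit? (commands quoted from the kubeadm output above)
	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 100
	
	    # List any control-plane containers CRI-O started, as the kubeadm output suggests
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
	    # Back on the host: retry the start with the cgroup-driver override from the suggestion line
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd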
	
	
	==> CRI-O <==
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.454488542Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14686b1f-fea6-4b3c-808e-6d238ca93931 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.455664687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=545da164-c7c4-4bc9-a4bd-c9b3d462a119 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.456062438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431500456038123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=545da164-c7c4-4bc9-a4bd-c9b3d462a119 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.456654052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01b1ae26-06d9-4ec6-aae0-77f8326d4b95 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.456723761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01b1ae26-06d9-4ec6-aae0-77f8326d4b95 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.456999755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3590f9508a3b09c552a77ad99852b72a135a2ec395476bf71cac9cba129609b,PodSandboxId:667ddfeba1da3a7fe58f2d2a2b29adf71c9ee53253c97c12c0005a1e9578c25e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430949452646939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2808c8da-8904-45a0-ae68-bfd68681540f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b462b77c174cac4d84d74fc00ce85e02aaef3f18d3808b44546bf941ac0cb1c,PodSandboxId:a26a5b0e3d29568b7bd6f0f497008d92d2a12fa6ab1f1ce3b8dbc9cc5136cff2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430949030934095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rh6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bdd8a47-abec-4dc4-a1ed-4a9a124417a3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0b5402930c1ec85b6c1e1b26d2c9ae3690ece4afc292821db305a7153157e1,PodSandboxId:a3bd07f6e1d967b7e5db80edef61810dce74c7d2527a4a8317756c3408391e50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430948836480376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6mp2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1aaefd9-c549-4065-b3dd-a0e4d925e592,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48332dd170fb8de47499e97df295d87bd84b9b1168d8de60ad34389930087b21,PodSandboxId:00e986108a7e8293ce3923847989281cc8c71c7385847293b744a5058aa9f6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733430947399651285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-znjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df1a22-d7e0-4a83-84dd-0e710185ded6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c198cf9812acaa3935b068fb6be235089141a68ab9f7163d841c3efd8f50de,PodSandboxId:54120b1ea76b62e9483d05b8886c083497d48bc5ed72f841331de381baf99b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430937122232254
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af9a21bfb03bc31f2f91411d7d8bd82,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb5ec3a2c4a30d4f224867a4150377f1dbee0b64a84ec5b60995cbad230dbd0,PodSandboxId:2d504d8e3573c07d52511c9893b2a60eb84d81276fb9d93a890b85d5d772c271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430937117
836470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9c4239ce8abe6c3eb5781fffc7f358,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab61f60cbe2c60f64877fe93a63f225ae437aa73fa09c4e1c45066805ed0c55,PodSandboxId:dfeb72c01827e0b29ce2360805da81526b7e12237ef9be36d577b1d74e30bae5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430937097155965,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68d1f88de87c2553b2b0d9b84e5dd72,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217f0ccb4526a1479f5a4dab73685aef7aba41064c97a4b142e0b617c510b39c,PodSandboxId:75d0129543712bbcc085162b673b18e317596cd72591b938f561e8633fb7feb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430937001245655,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c6cf0dd68ac2fdf5f39b36f0c8463645f13569af5dfd13d8db86ce45446171a,PodSandboxId:38d7aa2c1d75ef87b678906920ead810edfd47f8bd957bf7d0a1d4073314f23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430652292743904,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01b1ae26-06d9-4ec6-aae0-77f8326d4b95 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.494117519Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=552013cf-d17e-4734-b816-09d10cfe199d name=/runtime.v1.RuntimeService/Version
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.494191450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=552013cf-d17e-4734-b816-09d10cfe199d name=/runtime.v1.RuntimeService/Version
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.495497902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a35b715a-89d1-482e-9f6f-fa9eb5c20893 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.495865861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431500495845525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a35b715a-89d1-482e-9f6f-fa9eb5c20893 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.496497868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a1c928c-09ec-4f13-8f99-d4236c13e985 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.496549767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a1c928c-09ec-4f13-8f99-d4236c13e985 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.496763370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3590f9508a3b09c552a77ad99852b72a135a2ec395476bf71cac9cba129609b,PodSandboxId:667ddfeba1da3a7fe58f2d2a2b29adf71c9ee53253c97c12c0005a1e9578c25e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430949452646939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2808c8da-8904-45a0-ae68-bfd68681540f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b462b77c174cac4d84d74fc00ce85e02aaef3f18d3808b44546bf941ac0cb1c,PodSandboxId:a26a5b0e3d29568b7bd6f0f497008d92d2a12fa6ab1f1ce3b8dbc9cc5136cff2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430949030934095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rh6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bdd8a47-abec-4dc4-a1ed-4a9a124417a3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0b5402930c1ec85b6c1e1b26d2c9ae3690ece4afc292821db305a7153157e1,PodSandboxId:a3bd07f6e1d967b7e5db80edef61810dce74c7d2527a4a8317756c3408391e50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430948836480376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6mp2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1aaefd9-c549-4065-b3dd-a0e4d925e592,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48332dd170fb8de47499e97df295d87bd84b9b1168d8de60ad34389930087b21,PodSandboxId:00e986108a7e8293ce3923847989281cc8c71c7385847293b744a5058aa9f6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733430947399651285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-znjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df1a22-d7e0-4a83-84dd-0e710185ded6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c198cf9812acaa3935b068fb6be235089141a68ab9f7163d841c3efd8f50de,PodSandboxId:54120b1ea76b62e9483d05b8886c083497d48bc5ed72f841331de381baf99b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430937122232254
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af9a21bfb03bc31f2f91411d7d8bd82,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb5ec3a2c4a30d4f224867a4150377f1dbee0b64a84ec5b60995cbad230dbd0,PodSandboxId:2d504d8e3573c07d52511c9893b2a60eb84d81276fb9d93a890b85d5d772c271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430937117
836470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9c4239ce8abe6c3eb5781fffc7f358,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab61f60cbe2c60f64877fe93a63f225ae437aa73fa09c4e1c45066805ed0c55,PodSandboxId:dfeb72c01827e0b29ce2360805da81526b7e12237ef9be36d577b1d74e30bae5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430937097155965,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68d1f88de87c2553b2b0d9b84e5dd72,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217f0ccb4526a1479f5a4dab73685aef7aba41064c97a4b142e0b617c510b39c,PodSandboxId:75d0129543712bbcc085162b673b18e317596cd72591b938f561e8633fb7feb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430937001245655,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c6cf0dd68ac2fdf5f39b36f0c8463645f13569af5dfd13d8db86ce45446171a,PodSandboxId:38d7aa2c1d75ef87b678906920ead810edfd47f8bd957bf7d0a1d4073314f23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430652292743904,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a1c928c-09ec-4f13-8f99-d4236c13e985 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.526879837Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=37ef0e39-390f-4a92-a29b-a214de34f5d1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.527206164Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a731d56bed7b5645161ab76fd825729cf9257a516ff9945481de6bc1804b2862,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-cs42k,Uid:98b266c3-8ff0-4dc6-9c43-374dcd7c074a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430949504134830,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-cs42k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b266c3-8ff0-4dc6-9c43-374dcd7c074a,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:35:48.894189510Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:667ddfeba1da3a7fe58f2d2a2b29adf71c9ee53253c97c12c0005a1e9578c25e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2808c8da-8904-45a0-ae68-bfd68681540f,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430949296764922,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2808c8da-8904-45a0-ae68-bfd68681540f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-05T20:35:48.689652658Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3bd07f6e1d967b7e5db80edef61810dce74c7d2527a4a8317756c3408391e50,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6mp2h,Uid:01aaefd9-c549-4065-b3dd-a0e4d925e592,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430947949001205,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-6mp2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01aaefd9-c549-4065-b3dd-a0e4d925e592,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:35:47.638522047Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a26a5b0e3d29568b7bd6f0f497008d92d2a12fa6ab1f1ce3b8dbc9cc5136cff2,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rh6pj,Uid:4bdd8a47-abec-4dc4
-a1ed-4a9a124417a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430947924856649,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rh6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bdd8a47-abec-4dc4-a1ed-4a9a124417a3,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:35:47.611915005Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00e986108a7e8293ce3923847989281cc8c71c7385847293b744a5058aa9f6ab,Metadata:&PodSandboxMetadata{Name:kube-proxy-znjpk,Uid:f3df1a22-d7e0-4a83-84dd-0e710185ded6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430947247488570,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-znjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df1a22-d7e0-4a83-84dd-0e710185ded6,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:35:46.935918947Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2d504d8e3573c07d52511c9893b2a60eb84d81276fb9d93a890b85d5d772c271,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-789000,Uid:7d9c4239ce8abe6c3eb5781fffc7f358,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430936851845549,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9c4239ce8abe6c3eb5781fffc7f358,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7d9c4239ce8abe6c3eb5781fffc7f358,kubernetes.io/config.seen: 2024-12-05T20:35:36.399330551Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:54120b1ea76b62e9483d05b8886c083497d48bc5ed72f841331de381baf99b68,Metadata:&PodSandboxMetadata{Name:kube-controlle
r-manager-embed-certs-789000,Uid:1af9a21bfb03bc31f2f91411d7d8bd82,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430936846371560,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af9a21bfb03bc31f2f91411d7d8bd82,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1af9a21bfb03bc31f2f91411d7d8bd82,kubernetes.io/config.seen: 2024-12-05T20:35:36.399329766Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:75d0129543712bbcc085162b673b18e317596cd72591b938f561e8633fb7feb6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-789000,Uid:d150fe239f3ab0d40ea6589f44553acb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733430936846080872,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.200:8443,kubernetes.io/config.hash: d150fe239f3ab0d40ea6589f44553acb,kubernetes.io/config.seen: 2024-12-05T20:35:36.399328414Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dfeb72c01827e0b29ce2360805da81526b7e12237ef9be36d577b1d74e30bae5,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-789000,Uid:f68d1f88de87c2553b2b0d9b84e5dd72,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430936824603756,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68d1f88de87c2553b2b0d9b84e5dd72,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.3
9.200:2379,kubernetes.io/config.hash: f68d1f88de87c2553b2b0d9b84e5dd72,kubernetes.io/config.seen: 2024-12-05T20:35:36.399323702Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:38d7aa2c1d75ef87b678906920ead810edfd47f8bd957bf7d0a1d4073314f23d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-789000,Uid:d150fe239f3ab0d40ea6589f44553acb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1733430652033535155,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.200:8443,kubernetes.io/config.hash: d150fe239f3ab0d40ea6589f44553acb,kubernetes.io/config.seen: 2024-12-05T20:30:51.501136511Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-coll
ector/interceptors.go:74" id=37ef0e39-390f-4a92-a29b-a214de34f5d1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.528688434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=153a85a0-7b85-490e-938a-5b5387a805f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.528786421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=153a85a0-7b85-490e-938a-5b5387a805f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.529069976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3590f9508a3b09c552a77ad99852b72a135a2ec395476bf71cac9cba129609b,PodSandboxId:667ddfeba1da3a7fe58f2d2a2b29adf71c9ee53253c97c12c0005a1e9578c25e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430949452646939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2808c8da-8904-45a0-ae68-bfd68681540f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b462b77c174cac4d84d74fc00ce85e02aaef3f18d3808b44546bf941ac0cb1c,PodSandboxId:a26a5b0e3d29568b7bd6f0f497008d92d2a12fa6ab1f1ce3b8dbc9cc5136cff2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430949030934095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rh6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bdd8a47-abec-4dc4-a1ed-4a9a124417a3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0b5402930c1ec85b6c1e1b26d2c9ae3690ece4afc292821db305a7153157e1,PodSandboxId:a3bd07f6e1d967b7e5db80edef61810dce74c7d2527a4a8317756c3408391e50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430948836480376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6mp2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1aaefd9-c549-4065-b3dd-a0e4d925e592,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48332dd170fb8de47499e97df295d87bd84b9b1168d8de60ad34389930087b21,PodSandboxId:00e986108a7e8293ce3923847989281cc8c71c7385847293b744a5058aa9f6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733430947399651285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-znjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df1a22-d7e0-4a83-84dd-0e710185ded6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c198cf9812acaa3935b068fb6be235089141a68ab9f7163d841c3efd8f50de,PodSandboxId:54120b1ea76b62e9483d05b8886c083497d48bc5ed72f841331de381baf99b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430937122232254
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af9a21bfb03bc31f2f91411d7d8bd82,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb5ec3a2c4a30d4f224867a4150377f1dbee0b64a84ec5b60995cbad230dbd0,PodSandboxId:2d504d8e3573c07d52511c9893b2a60eb84d81276fb9d93a890b85d5d772c271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430937117
836470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9c4239ce8abe6c3eb5781fffc7f358,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab61f60cbe2c60f64877fe93a63f225ae437aa73fa09c4e1c45066805ed0c55,PodSandboxId:dfeb72c01827e0b29ce2360805da81526b7e12237ef9be36d577b1d74e30bae5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430937097155965,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68d1f88de87c2553b2b0d9b84e5dd72,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217f0ccb4526a1479f5a4dab73685aef7aba41064c97a4b142e0b617c510b39c,PodSandboxId:75d0129543712bbcc085162b673b18e317596cd72591b938f561e8633fb7feb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430937001245655,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c6cf0dd68ac2fdf5f39b36f0c8463645f13569af5dfd13d8db86ce45446171a,PodSandboxId:38d7aa2c1d75ef87b678906920ead810edfd47f8bd957bf7d0a1d4073314f23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430652292743904,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=153a85a0-7b85-490e-938a-5b5387a805f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.543711106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a48ccdc-0b61-4cd9-8c69-4d2571d06ed1 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.543807055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a48ccdc-0b61-4cd9-8c69-4d2571d06ed1 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.545104462Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f518782a-d850-41cb-a865-d22428fd036a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.545900908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431500545869594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f518782a-d850-41cb-a865-d22428fd036a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.546812790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=844eba06-8e2f-4bb1-b730-16f5b1a94383 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.546883430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=844eba06-8e2f-4bb1-b730-16f5b1a94383 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:00 embed-certs-789000 crio[718]: time="2024-12-05 20:45:00.547150801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3590f9508a3b09c552a77ad99852b72a135a2ec395476bf71cac9cba129609b,PodSandboxId:667ddfeba1da3a7fe58f2d2a2b29adf71c9ee53253c97c12c0005a1e9578c25e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430949452646939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2808c8da-8904-45a0-ae68-bfd68681540f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b462b77c174cac4d84d74fc00ce85e02aaef3f18d3808b44546bf941ac0cb1c,PodSandboxId:a26a5b0e3d29568b7bd6f0f497008d92d2a12fa6ab1f1ce3b8dbc9cc5136cff2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430949030934095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rh6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bdd8a47-abec-4dc4-a1ed-4a9a124417a3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0b5402930c1ec85b6c1e1b26d2c9ae3690ece4afc292821db305a7153157e1,PodSandboxId:a3bd07f6e1d967b7e5db80edef61810dce74c7d2527a4a8317756c3408391e50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430948836480376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6mp2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1aaefd9-c549-4065-b3dd-a0e4d925e592,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48332dd170fb8de47499e97df295d87bd84b9b1168d8de60ad34389930087b21,PodSandboxId:00e986108a7e8293ce3923847989281cc8c71c7385847293b744a5058aa9f6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733430947399651285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-znjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df1a22-d7e0-4a83-84dd-0e710185ded6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c198cf9812acaa3935b068fb6be235089141a68ab9f7163d841c3efd8f50de,PodSandboxId:54120b1ea76b62e9483d05b8886c083497d48bc5ed72f841331de381baf99b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430937122232254
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af9a21bfb03bc31f2f91411d7d8bd82,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb5ec3a2c4a30d4f224867a4150377f1dbee0b64a84ec5b60995cbad230dbd0,PodSandboxId:2d504d8e3573c07d52511c9893b2a60eb84d81276fb9d93a890b85d5d772c271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430937117
836470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9c4239ce8abe6c3eb5781fffc7f358,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab61f60cbe2c60f64877fe93a63f225ae437aa73fa09c4e1c45066805ed0c55,PodSandboxId:dfeb72c01827e0b29ce2360805da81526b7e12237ef9be36d577b1d74e30bae5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430937097155965,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68d1f88de87c2553b2b0d9b84e5dd72,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217f0ccb4526a1479f5a4dab73685aef7aba41064c97a4b142e0b617c510b39c,PodSandboxId:75d0129543712bbcc085162b673b18e317596cd72591b938f561e8633fb7feb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430937001245655,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c6cf0dd68ac2fdf5f39b36f0c8463645f13569af5dfd13d8db86ce45446171a,PodSandboxId:38d7aa2c1d75ef87b678906920ead810edfd47f8bd957bf7d0a1d4073314f23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430652292743904,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=844eba06-8e2f-4bb1-b730-16f5b1a94383 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b3590f9508a3b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   667ddfeba1da3       storage-provisioner
	0b462b77c174c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   a26a5b0e3d295       coredns-7c65d6cfc9-rh6pj
	be0b5402930c1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   a3bd07f6e1d96       coredns-7c65d6cfc9-6mp2h
	48332dd170fb8       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   00e986108a7e8       kube-proxy-znjpk
	f8c198cf9812a       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   54120b1ea76b6       kube-controller-manager-embed-certs-789000
	6eb5ec3a2c4a3       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   2d504d8e3573c       kube-scheduler-embed-certs-789000
	2ab61f60cbe2c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   dfeb72c01827e       etcd-embed-certs-789000
	217f0ccb4526a       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   75d0129543712       kube-apiserver-embed-certs-789000
	3c6cf0dd68ac2       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   38d7aa2c1d75e       kube-apiserver-embed-certs-789000
	
	
	==> coredns [0b462b77c174cac4d84d74fc00ce85e02aaef3f18d3808b44546bf941ac0cb1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [be0b5402930c1ec85b6c1e1b26d2c9ae3690ece4afc292821db305a7153157e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-789000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-789000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=embed-certs-789000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_35_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:35:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-789000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:44:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:40:59 +0000   Thu, 05 Dec 2024 20:35:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:40:59 +0000   Thu, 05 Dec 2024 20:35:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:40:59 +0000   Thu, 05 Dec 2024 20:35:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:40:59 +0000   Thu, 05 Dec 2024 20:35:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    embed-certs-789000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14bd481cd3474e2db5e4383ceddf4f11
	  System UUID:                14bd481c-d347-4e2d-b5e4-383ceddf4f11
	  Boot ID:                    8a1a0da2-2faa-4c95-9a90-12d042e0f521
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6mp2h                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-7c65d6cfc9-rh6pj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-embed-certs-789000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-789000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-789000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-znjpk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-embed-certs-789000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-6867b74b74-cs42k               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  Starting                 9m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node embed-certs-789000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node embed-certs-789000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node embed-certs-789000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node embed-certs-789000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node embed-certs-789000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node embed-certs-789000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s                  node-controller  Node embed-certs-789000 event: Registered Node embed-certs-789000 in Controller
	
	
	==> dmesg <==
	[  +0.052501] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041960] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.965205] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.772504] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.648694] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.286428] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.057035] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075050] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.179907] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.169729] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.317023] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[  +4.500262] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +0.067306] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.101300] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +4.578459] kauditd_printk_skb: 97 callbacks suppressed
	[Dec 5 20:31] kauditd_printk_skb: 85 callbacks suppressed
	[Dec 5 20:35] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.706956] systemd-fstab-generator[2617]: Ignoring "noauto" option for root device
	[  +4.576632] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.484793] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +5.362620] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.098938] systemd-fstab-generator[3091]: Ignoring "noauto" option for root device
	[  +4.974915] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [2ab61f60cbe2c60f64877fe93a63f225ae437aa73fa09c4e1c45066805ed0c55] <==
	{"level":"info","ts":"2024-12-05T20:35:37.481928Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T20:35:37.482158Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"fe8c4457455e3a5","initial-advertise-peer-urls":["https://192.168.39.200:2380"],"listen-peer-urls":["https://192.168.39.200:2380"],"advertise-client-urls":["https://192.168.39.200:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.200:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T20:35:37.482188Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T20:35:37.482307Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-12-05T20:35:37.482321Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.200:2380"}
	{"level":"info","ts":"2024-12-05T20:35:37.515871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-05T20:35:37.515967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-05T20:35:37.516001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgPreVoteResp from fe8c4457455e3a5 at term 1"}
	{"level":"info","ts":"2024-12-05T20:35:37.516031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became candidate at term 2"}
	{"level":"info","ts":"2024-12-05T20:35:37.516054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 received MsgVoteResp from fe8c4457455e3a5 at term 2"}
	{"level":"info","ts":"2024-12-05T20:35:37.516090Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe8c4457455e3a5 became leader at term 2"}
	{"level":"info","ts":"2024-12-05T20:35:37.516115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe8c4457455e3a5 elected leader fe8c4457455e3a5 at term 2"}
	{"level":"info","ts":"2024-12-05T20:35:37.519636Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"fe8c4457455e3a5","local-member-attributes":"{Name:embed-certs-789000 ClientURLs:[https://192.168.39.200:2379]}","request-path":"/0/members/fe8c4457455e3a5/attributes","cluster-id":"1d37198946ef4128","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:35:37.521424Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:35:37.521817Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:35:37.523429Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:35:37.524472Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:35:37.524516Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:35:37.529989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:35:37.530771Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:35:37.530904Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:35:37.530995Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:35:37.531038Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:35:37.540714Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:35:37.541479Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.200:2379"}
	
	
	==> kernel <==
	 20:45:00 up 14 min,  0 users,  load average: 0.18, 0.27, 0.19
	Linux embed-certs-789000 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [217f0ccb4526a1479f5a4dab73685aef7aba41064c97a4b142e0b617c510b39c] <==
	W1205 20:40:40.723606       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:40:40.723876       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:40:40.724931       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:40:40.724986       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:41:40.726183       1 handler_proxy.go:99] no RequestInfo found in the context
	W1205 20:41:40.726569       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:41:40.726653       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1205 20:41:40.726673       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:41:40.727885       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:41:40.727950       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:43:40.728976       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:43:40.729468       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 20:43:40.728997       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:43:40.729590       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 20:43:40.730726       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:43:40.730763       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
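	The repeated "ResponseCode: 503 ... v1beta1.metrics.k8s.io" errors in the kube-apiserver log above indicate the aggregated metrics APIService backed by metrics-server never became available, which lines up with the failing metrics-server/AddonExistsAfterStop tests in this report. A minimal way to confirm this from the host would be to query the APIService and the metrics-server workload directly; the sketch below is an assumption-laden example, not part of the test harness — the kubectl context name (taken from the minikube profile embed-certs-789000) and the k8s-app=metrics-server label are inferred from the surrounding log output.

	  # Check whether the aggregated API is reporting Available=True
	  kubectl --context embed-certs-789000 get apiservice v1beta1.metrics.k8s.io

	  # Inspect the metrics-server pod and its logs (label/deployment name assumed from the
	  # kube-system/metrics-server-6867b74b74 replicaset seen in the controller-manager log)
	  kubectl --context embed-certs-789000 -n kube-system describe pod -l k8s-app=metrics-server
	  kubectl --context embed-certs-789000 -n kube-system logs deploy/metrics-server

	If the APIService shows Available=False with a FailedDiscoveryCheck or MissingEndpoints reason, the 503s above are the apiserver proxying to a backend that is not serving, rather than a fault in the apiserver itself.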
	==> kube-apiserver [3c6cf0dd68ac2fdf5f39b36f0c8463645f13569af5dfd13d8db86ce45446171a] <==
	W1205 20:35:32.266632       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.276571       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.329791       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.346490       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.378235       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.508014       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.525936       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.563902       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.596833       1 logging.go:55] [core] [Channel #15 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.627285       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.642488       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.647051       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.700895       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.716652       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.716783       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.833345       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.858843       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.888650       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.910121       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.918976       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.975857       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:33.002664       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:33.097869       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:33.195828       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:33.293851       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [f8c198cf9812acaa3935b068fb6be235089141a68ab9f7163d841c3efd8f50de] <==
	E1205 20:39:46.728136       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:39:47.163147       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:40:16.734349       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:40:17.172270       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:40:46.743773       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:40:47.180762       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:40:59.881801       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-789000"
	E1205 20:41:16.750565       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:41:17.191043       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:41:46.756975       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:41:47.200000       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:42:01.356238       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="229.426µs"
	I1205 20:42:13.355586       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="281.128µs"
	E1205 20:42:16.763505       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:42:17.209038       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:42:46.772280       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:42:47.217351       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:43:16.779211       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:43:17.225250       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:43:46.787356       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:43:47.234114       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:44:16.793361       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:44:17.243643       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:44:46.800957       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:44:47.251687       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
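	The repeating resource_quota_controller and garbage-collector errors above mean the metrics.k8s.io/v1beta1 APIService is registered but never becomes reachable, which matches the metrics-server pod staying in ImagePullBackOff in the kubelet log further down. A minimal manual check would look roughly like the sketch below (the APIService name v1beta1.metrics.k8s.io and the k8s-app=metrics-server label are the addon's conventional names, assumed here rather than taken from this run):

	# Is the aggregated metrics API marked Available, and is its backing pod running?
	kubectl --context embed-certs-789000 get apiservice v1beta1.metrics.k8s.io
	kubectl --context embed-certs-789000 -n kube-system get pods -l k8s-app=metrics-server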
	
	
	==> kube-proxy [48332dd170fb8de47499e97df295d87bd84b9b1168d8de60ad34389930087b21] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:35:48.011075       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:35:48.034284       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	E1205 20:35:48.034437       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:35:48.165753       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:35:48.165799       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:35:48.165834       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:35:48.172474       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:35:48.172821       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:35:48.172851       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:35:48.174526       1 config.go:199] "Starting service config controller"
	I1205 20:35:48.174570       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:35:48.174617       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:35:48.174625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:35:48.179107       1 config.go:328] "Starting node config controller"
	I1205 20:35:48.179230       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:35:48.275928       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:35:48.275991       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:35:48.283499       1 shared_informer.go:320] Caches are synced for node config
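	The "Error cleaning up nftables rules ... Operation not supported" messages above come from kube-proxy trying to delete leftover nftables state on a guest kernel without nft table support; it then proceeds in iptables mode, as the "Using iptables Proxier" line confirms, so these errors appear cosmetic for this run. A rough way to confirm the same thing by hand (sketch only; it assumes the nft and iptables-save binaries are present in the minikube guest):

	# nft should fail or show nothing, while iptables should hold the KUBE-* chains kube-proxy programmed
	minikube -p embed-certs-789000 ssh -- sudo nft list tables
	minikube -p embed-certs-789000 ssh -- sudo iptables-save | grep -c KUBE-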
	
	
	==> kube-scheduler [6eb5ec3a2c4a30d4f224867a4150377f1dbee0b64a84ec5b60995cbad230dbd0] <==
	W1205 20:35:39.739825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:35:39.740291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.554854       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:35:40.554916       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 20:35:40.602749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:35:40.602857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.672994       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:35:40.673032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.747640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 20:35:40.747694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.754126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:35:40.754180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.856235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:35:40.856286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.903042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:35:40.903195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.916593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:35:40.916724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.962609       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:35:40.962644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.964705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:35:40.964753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:41.062198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:35:41.062845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1205 20:35:42.930209       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
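	The burst of "forbidden" list/watch errors above is the scheduler's informers starting before its RBAC bindings are served; they stop once API server bootstrapping completes, which is why the excerpt ends with the client-ca caches syncing, so they are not the cause of this failure. Had they persisted, the scheduler's permissions could be checked directly, for example (sketch; impersonating system:kube-scheduler is an assumed way to reproduce the check, not something the test does):

	kubectl --context embed-certs-789000 auth can-i list pods --all-namespaces --as=system:kube-scheduler
	kubectl --context embed-certs-789000 auth can-i list replicasets.apps --as=system:kube-scheduler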
	
	
	==> kubelet <==
	Dec 05 20:43:47 embed-certs-789000 kubelet[2947]: E1205 20:43:47.339347    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	Dec 05 20:43:52 embed-certs-789000 kubelet[2947]: E1205 20:43:52.470573    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431432469902213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:43:52 embed-certs-789000 kubelet[2947]: E1205 20:43:52.470910    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431432469902213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:00 embed-certs-789000 kubelet[2947]: E1205 20:44:00.339991    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	Dec 05 20:44:02 embed-certs-789000 kubelet[2947]: E1205 20:44:02.474233    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431442473592007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:02 embed-certs-789000 kubelet[2947]: E1205 20:44:02.474298    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431442473592007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:12 embed-certs-789000 kubelet[2947]: E1205 20:44:12.475690    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431452475346851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:12 embed-certs-789000 kubelet[2947]: E1205 20:44:12.475715    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431452475346851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:13 embed-certs-789000 kubelet[2947]: E1205 20:44:13.339261    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	Dec 05 20:44:22 embed-certs-789000 kubelet[2947]: E1205 20:44:22.477926    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431462477557921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:22 embed-certs-789000 kubelet[2947]: E1205 20:44:22.478354    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431462477557921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:24 embed-certs-789000 kubelet[2947]: E1205 20:44:24.340178    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	Dec 05 20:44:32 embed-certs-789000 kubelet[2947]: E1205 20:44:32.482035    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431472481515696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:32 embed-certs-789000 kubelet[2947]: E1205 20:44:32.482336    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431472481515696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:39 embed-certs-789000 kubelet[2947]: E1205 20:44:39.340666    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	Dec 05 20:44:42 embed-certs-789000 kubelet[2947]: E1205 20:44:42.376835    2947 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 20:44:42 embed-certs-789000 kubelet[2947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:44:42 embed-certs-789000 kubelet[2947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:44:42 embed-certs-789000 kubelet[2947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:44:42 embed-certs-789000 kubelet[2947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:44:42 embed-certs-789000 kubelet[2947]: E1205 20:44:42.484327    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431482483894253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:42 embed-certs-789000 kubelet[2947]: E1205 20:44:42.484358    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431482483894253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:50 embed-certs-789000 kubelet[2947]: E1205 20:44:50.340071    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	Dec 05 20:44:52 embed-certs-789000 kubelet[2947]: E1205 20:44:52.486817    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431492486452155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:52 embed-certs-789000 kubelet[2947]: E1205 20:44:52.486850    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431492486452155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
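	The ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4 above is expected: as the Audit table below records, metrics-server was enabled with --registries=MetricsServer=fake.domain, so the image deliberately points at an unreachable registry and the pod can never become Ready, which is what this test ultimately times out on. To surface the same pull error outside the test harness one could run something like (sketch; the pod name is copied from the kubelet messages above, the namespace from the controller logs):

	kubectl --context embed-certs-789000 -n kube-system describe pod metrics-server-6867b74b74-cs42k
	kubectl --context embed-certs-789000 -n kube-system get events --field-selector reason=Failed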
	
	
	==> storage-provisioner [b3590f9508a3b09c552a77ad99852b72a135a2ec395476bf71cac9cba129609b] <==
	I1205 20:35:49.596428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:35:49.626633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:35:49.626819       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:35:49.644774       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:35:49.645582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc39230d-60a9-4f43-90b2-51b526f81b18", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-789000_3b797468-49ed-4acf-b247-e4982cdae2fa became leader
	I1205 20:35:49.645631       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-789000_3b797468-49ed-4acf-b247-e4982cdae2fa!
	I1205 20:35:49.746769       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-789000_3b797468-49ed-4acf-b247-e4982cdae2fa!
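	The storage-provisioner block above looks healthy: it acquires the kube-system/k8s.io-minikube-hostpath lease and starts its controller, so it is not implicated in this failure. If needed, the election state it records can be inspected on the cluster (sketch; the Endpoints object name is taken from the event above, the leader annotation is the one used by the legacy Endpoints-based lock and is an assumption here):

	# The control-plane.alpha.kubernetes.io/leader annotation should name the current holder
	kubectl --context embed-certs-789000 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml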
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-789000 -n embed-certs-789000
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-789000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-cs42k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-789000 describe pod metrics-server-6867b74b74-cs42k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-789000 describe pod metrics-server-6867b74b74-cs42k: exit status 1 (68.784995ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-cs42k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-789000 describe pod metrics-server-6867b74b74-cs42k: exit status 1
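	Note that the describe above returned NotFound because the helper ran it without a namespace, so kubectl looked in default while the pod lives in kube-system (it is listed as a non-running pod a few lines up and appears as kube-system/metrics-server-6867b74b74-cs42k in the kubelet log). A namespace-qualified or label-based variant would have found it (sketch; the k8s-app=metrics-server label is the addon's conventional label, assumed rather than observed here):

	kubectl --context embed-certs-789000 -n kube-system describe pod metrics-server-6867b74b74-cs42k
	kubectl --context embed-certs-789000 -n kube-system describe pod -l k8s-app=metrics-server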
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-05 20:45:42.657245231 +0000 UTC m=+6234.862865568
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
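As with the embed-certs profile above, the dashboard pod never shows up inside the 9m window. The equivalent manual check to what the test polls for would be roughly (sketch; the namespace and label are taken from the wait message above, and the timeout value is arbitrary):

kubectl --context default-k8s-diff-port-942599 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
kubectl --context default-k8s-diff-port-942599 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=60s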
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-942599 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-942599 logs -n 25: (2.245020898s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-790679 -- sudo                         | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-790679                                 | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-886958                           | kubernetes-upgrade-886958    | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-816185             | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-789000            | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-242147 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | disable-driver-mounts-242147                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:25 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386085        | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-942599  | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-816185                  | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-789000                 | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:37 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386085             | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-942599       | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:36 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:28:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:28:03.038037  585929 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:28:03.038168  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038178  585929 out.go:358] Setting ErrFile to fd 2...
	I1205 20:28:03.038185  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038375  585929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:28:03.038955  585929 out.go:352] Setting JSON to false
	I1205 20:28:03.039948  585929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":11429,"bootTime":1733419054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:28:03.040015  585929 start.go:139] virtualization: kvm guest
	I1205 20:28:03.042326  585929 out.go:177] * [default-k8s-diff-port-942599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:28:03.044291  585929 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:28:03.044320  585929 notify.go:220] Checking for updates...
	I1205 20:28:03.047072  585929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:28:03.048480  585929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:28:03.049796  585929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:28:03.051035  585929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:28:03.052263  585929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:28:03.054167  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:28:03.054665  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.054749  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.070361  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33501
	I1205 20:28:03.070891  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.071534  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.071563  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.071995  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.072285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.072587  585929 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:28:03.072920  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.072968  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.088186  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I1205 20:28:03.088660  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.089202  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.089224  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.089542  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.089782  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.122562  585929 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:28:03.123970  585929 start.go:297] selected driver: kvm2
	I1205 20:28:03.123992  585929 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.124128  585929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:28:03.125014  585929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.125111  585929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:28:03.140461  585929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:28:03.140904  585929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:28:03.140943  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:28:03.141015  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:28:03.141067  585929 start.go:340] cluster config:
	{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.141179  585929 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.144215  585929 out.go:177] * Starting "default-k8s-diff-port-942599" primary control-plane node in "default-k8s-diff-port-942599" cluster
	I1205 20:28:03.276565  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:03.145620  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:28:03.145661  585929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:28:03.145676  585929 cache.go:56] Caching tarball of preloaded images
	I1205 20:28:03.145844  585929 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:28:03.145864  585929 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:28:03.146005  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:28:03.146240  585929 start.go:360] acquireMachinesLock for default-k8s-diff-port-942599: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:28:06.348547  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:12.428620  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:15.500614  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:21.580587  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:24.652618  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:30.732598  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:33.804612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:39.884624  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:42.956577  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:49.036617  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:52.108607  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:58.188605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:01.260573  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:07.340591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:10.412578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:16.492574  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:19.564578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:25.644591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:28.716619  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:34.796609  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:37.868605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:43.948594  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:47.020553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:53.100499  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:56.172560  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:02.252612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:05.324648  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:11.404563  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:14.476553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:20.556568  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
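The repeated "Error dialing TCP ... no route to host" lines above are libmachine polling the guest's SSH port until the restarted VM answers; a stopped or unreachable machine shows up as this steady stream of failures. A minimal standalone sketch of that kind of reachability poll, using only the Go standard library; the address, interval, and deadline below are illustrative values, not minikube's actual settings.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls a TCP port until it accepts connections or the deadline passes.
    // Each failed dial is logged, mirroring the "Error dialing TCP" lines above.
    func waitForSSH(addr string, interval, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            fmt.Printf("Error dialing TCP: %v\n", err)
            if time.Now().After(stop) {
                return fmt.Errorf("%s not reachable after %s", addr, deadline)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // 192.168.61.37:22 is the guest address seen in the log; timings are illustrative.
        if err := waitForSSH("192.168.61.37:22", 3*time.Second, 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }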
	I1205 20:30:23.561620  585113 start.go:364] duration metric: took 4m32.790399884s to acquireMachinesLock for "embed-certs-789000"
	I1205 20:30:23.561696  585113 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:23.561711  585113 fix.go:54] fixHost starting: 
	I1205 20:30:23.562327  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:23.562400  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:23.578260  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I1205 20:30:23.578843  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:23.579379  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:30:23.579405  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:23.579776  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:23.580051  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:23.580222  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:30:23.582161  585113 fix.go:112] recreateIfNeeded on embed-certs-789000: state=Stopped err=<nil>
	I1205 20:30:23.582190  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	W1205 20:30:23.582386  585113 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:23.584585  585113 out.go:177] * Restarting existing kvm2 VM for "embed-certs-789000" ...
	I1205 20:30:23.586583  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Start
	I1205 20:30:23.586835  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring networks are active...
	I1205 20:30:23.587628  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network default is active
	I1205 20:30:23.587937  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network mk-embed-certs-789000 is active
	I1205 20:30:23.588228  585113 main.go:141] libmachine: (embed-certs-789000) Getting domain xml...
	I1205 20:30:23.588898  585113 main.go:141] libmachine: (embed-certs-789000) Creating domain...
	I1205 20:30:24.829936  585113 main.go:141] libmachine: (embed-certs-789000) Waiting to get IP...
	I1205 20:30:24.830897  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:24.831398  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:24.831465  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:24.831364  586433 retry.go:31] will retry after 208.795355ms: waiting for machine to come up
	I1205 20:30:25.042078  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.042657  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.042689  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.042599  586433 retry.go:31] will retry after 385.313968ms: waiting for machine to come up
	I1205 20:30:25.429439  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.429877  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.429913  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.429811  586433 retry.go:31] will retry after 432.591358ms: waiting for machine to come up
	I1205 20:30:23.558453  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:30:23.558508  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.558905  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:30:23.558943  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.559166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:30:23.561471  585025 machine.go:96] duration metric: took 4m37.380964872s to provisionDockerMachine
	I1205 20:30:23.561518  585025 fix.go:56] duration metric: took 4m37.403172024s for fixHost
	I1205 20:30:23.561524  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 4m37.40319095s
	W1205 20:30:23.561546  585025 start.go:714] error starting host: provision: host is not running
	W1205 20:30:23.561677  585025 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:30:23.561688  585025 start.go:729] Will try again in 5 seconds ...
	I1205 20:30:25.864656  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.865217  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.865255  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.865138  586433 retry.go:31] will retry after 571.148349ms: waiting for machine to come up
	I1205 20:30:26.437644  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:26.438220  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:26.438250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:26.438165  586433 retry.go:31] will retry after 585.234455ms: waiting for machine to come up
	I1205 20:30:27.025107  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.025510  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.025538  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.025459  586433 retry.go:31] will retry after 648.291531ms: waiting for machine to come up
	I1205 20:30:27.675457  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.675898  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.675928  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.675838  586433 retry.go:31] will retry after 804.071148ms: waiting for machine to come up
	I1205 20:30:28.481966  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:28.482386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:28.482416  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:28.482329  586433 retry.go:31] will retry after 905.207403ms: waiting for machine to come up
	I1205 20:30:29.388933  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:29.389546  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:29.389571  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:29.389484  586433 retry.go:31] will retry after 1.48894232s: waiting for machine to come up
	I1205 20:30:28.562678  585025 start.go:360] acquireMachinesLock for no-preload-816185: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:30:30.880218  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:30.880742  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:30.880773  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:30.880685  586433 retry.go:31] will retry after 2.314200549s: waiting for machine to come up
	I1205 20:30:33.198477  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:33.198998  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:33.199029  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:33.198945  586433 retry.go:31] will retry after 1.922541264s: waiting for machine to come up
	I1205 20:30:35.123922  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:35.124579  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:35.124607  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:35.124524  586433 retry.go:31] will retry after 3.537087912s: waiting for machine to come up
	I1205 20:30:38.662839  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:38.663212  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:38.663250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:38.663160  586433 retry.go:31] will retry after 3.371938424s: waiting for machine to come up
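The retry.go:31 lines above show the "waiting for machine to come up" loop: each failed attempt to read the domain's IP address is followed by a progressively longer, jittered sleep. A rough sketch of that backoff pattern; the initial delay, the doubling, and the jitter are assumptions rather than the exact retry policy.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling fn until it succeeds, sleeping a little longer
    // (with random jitter) after every failure, up to maxAttempts.
    func retryWithBackoff(fn func() error, maxAttempts int) error {
        delay := 200 * time.Millisecond
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            if err := fn(); err == nil {
                return nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("retry %d: will retry after %s: waiting for machine to come up\n", attempt, wait)
            time.Sleep(wait)
            delay *= 2
        }
        return errors.New("machine never came up")
    }

    func main() {
        attempts := 0
        _ = retryWithBackoff(func() error {
            attempts++
            if attempts < 4 {
                return errors.New("unable to find current IP address")
            }
            return nil
        }, 10)
    }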
	I1205 20:30:43.457332  585602 start.go:364] duration metric: took 3m31.488905557s to acquireMachinesLock for "old-k8s-version-386085"
	I1205 20:30:43.457418  585602 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:43.457427  585602 fix.go:54] fixHost starting: 
	I1205 20:30:43.457835  585602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:43.457891  585602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:43.474845  585602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I1205 20:30:43.475386  585602 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:43.475993  585602 main.go:141] libmachine: Using API Version  1
	I1205 20:30:43.476026  585602 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:43.476404  585602 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:43.476613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:30:43.476778  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetState
	I1205 20:30:43.478300  585602 fix.go:112] recreateIfNeeded on old-k8s-version-386085: state=Stopped err=<nil>
	I1205 20:30:43.478329  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	W1205 20:30:43.478502  585602 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:43.480644  585602 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386085" ...
	I1205 20:30:42.038738  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039204  585113 main.go:141] libmachine: (embed-certs-789000) Found IP for machine: 192.168.39.200
	I1205 20:30:42.039235  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has current primary IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039244  585113 main.go:141] libmachine: (embed-certs-789000) Reserving static IP address...
	I1205 20:30:42.039760  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.039806  585113 main.go:141] libmachine: (embed-certs-789000) DBG | skip adding static IP to network mk-embed-certs-789000 - found existing host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"}
	I1205 20:30:42.039819  585113 main.go:141] libmachine: (embed-certs-789000) Reserved static IP address: 192.168.39.200
	I1205 20:30:42.039835  585113 main.go:141] libmachine: (embed-certs-789000) Waiting for SSH to be available...
	I1205 20:30:42.039843  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Getting to WaitForSSH function...
	I1205 20:30:42.042013  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042352  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.042386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042542  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH client type: external
	I1205 20:30:42.042562  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa (-rw-------)
	I1205 20:30:42.042586  585113 main.go:141] libmachine: (embed-certs-789000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:30:42.042595  585113 main.go:141] libmachine: (embed-certs-789000) DBG | About to run SSH command:
	I1205 20:30:42.042603  585113 main.go:141] libmachine: (embed-certs-789000) DBG | exit 0
	I1205 20:30:42.168573  585113 main.go:141] libmachine: (embed-certs-789000) DBG | SSH cmd err, output: <nil>: 
	I1205 20:30:42.168960  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetConfigRaw
	I1205 20:30:42.169783  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.172396  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.172790  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.172818  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.173023  585113 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/config.json ...
	I1205 20:30:42.173214  585113 machine.go:93] provisionDockerMachine start ...
	I1205 20:30:42.173234  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:42.173465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.175399  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175754  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.175785  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175885  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.176063  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176208  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176412  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.176583  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.176816  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.176830  585113 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:30:42.280829  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:30:42.280861  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281135  585113 buildroot.go:166] provisioning hostname "embed-certs-789000"
	I1205 20:30:42.281168  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.284355  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284692  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.284723  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284817  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.285019  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285185  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285338  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.285511  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.285716  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.285730  585113 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-789000 && echo "embed-certs-789000" | sudo tee /etc/hostname
	I1205 20:30:42.409310  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-789000
	
	I1205 20:30:42.409370  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.412182  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412524  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.412566  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412779  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.412989  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413137  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413278  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.413468  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.413674  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.413690  585113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-789000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-789000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-789000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:30:42.529773  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:30:42.529806  585113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:30:42.529829  585113 buildroot.go:174] setting up certificates
	I1205 20:30:42.529841  585113 provision.go:84] configureAuth start
	I1205 20:30:42.529850  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.530201  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.533115  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533527  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.533558  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533753  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.535921  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536310  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.536339  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536518  585113 provision.go:143] copyHostCerts
	I1205 20:30:42.536610  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:30:42.536631  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:30:42.536698  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:30:42.536793  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:30:42.536802  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:30:42.536826  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:30:42.536880  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:30:42.536887  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:30:42.536908  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:30:42.536956  585113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-789000 san=[127.0.0.1 192.168.39.200 embed-certs-789000 localhost minikube]
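The provisioning step above issues a server certificate signed by the local minikube CA, with the node's IP and hostnames in the SAN list. A condensed sketch of issuing such a certificate with Go's standard library; the file names are placeholders, the CA key is assumed to be PKCS#1 PEM, and this is illustrative rather than minikube's own helper.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the CA pair; paths are placeholders.
        caPEM, err := os.ReadFile("ca.pem")
        if err != nil {
            log.Fatal(err)
        }
        caKeyPEM, err := os.ReadFile("ca-key.pem")
        if err != nil {
            log.Fatal(err)
        }
        caBlock, _ := pem.Decode(caPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || keyBlock == nil {
            log.Fatal("bad PEM input")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        // Fresh key for the server certificate.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-789000"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN entries seen in the log line above.
            DNSNames:    []string{"embed-certs-789000", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.200")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0600)
        os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
    }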
	I1205 20:30:42.832543  585113 provision.go:177] copyRemoteCerts
	I1205 20:30:42.832610  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:30:42.832640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.835403  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835669  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.835701  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835848  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.836027  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.836161  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.836314  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:42.918661  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:30:42.943903  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 20:30:42.968233  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:30:42.993174  585113 provision.go:87] duration metric: took 463.317149ms to configureAuth
	I1205 20:30:42.993249  585113 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:30:42.993449  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:30:42.993554  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.996211  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996637  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.996696  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996841  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.997049  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997196  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997305  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.997458  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.997641  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.997656  585113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:30:43.220096  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:30:43.220127  585113 machine.go:96] duration metric: took 1.046899757s to provisionDockerMachine
	I1205 20:30:43.220141  585113 start.go:293] postStartSetup for "embed-certs-789000" (driver="kvm2")
	I1205 20:30:43.220152  585113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:30:43.220176  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.220544  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:30:43.220584  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.223481  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.223860  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.223889  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.224102  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.224316  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.224483  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.224667  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.307878  585113 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:30:43.312875  585113 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:30:43.312905  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:30:43.312981  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:30:43.313058  585113 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:30:43.313169  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:30:43.323221  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:43.347978  585113 start.go:296] duration metric: took 127.819083ms for postStartSetup
	I1205 20:30:43.348023  585113 fix.go:56] duration metric: took 19.786318897s for fixHost
	I1205 20:30:43.348046  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.350639  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351004  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.351026  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351247  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.351478  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351642  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351803  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.351950  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:43.352122  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:43.352133  585113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:30:43.457130  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430643.415370749
	
	I1205 20:30:43.457164  585113 fix.go:216] guest clock: 1733430643.415370749
	I1205 20:30:43.457176  585113 fix.go:229] Guest: 2024-12-05 20:30:43.415370749 +0000 UTC Remote: 2024-12-05 20:30:43.34802793 +0000 UTC m=+292.733798952 (delta=67.342819ms)
	I1205 20:30:43.457209  585113 fix.go:200] guest clock delta is within tolerance: 67.342819ms
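fix.go compares the guest's date +%s.%N output against the host clock and only resynchronizes when the delta exceeds a tolerance; here the 67.342819ms delta passes. A small sketch of that comparison; the one-second tolerance is an assumed threshold, not necessarily minikube's value.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far the
    // guest clock is from the given host reference time.
    func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        const tolerance = time.Second // assumed threshold
        // Values taken from the guest/remote timestamps in the log above.
        delta, err := clockDelta("1733430643.415370749", time.Unix(0, 1733430643348027930))
        if err != nil {
            panic(err)
        }
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
        } else {
            fmt.Printf("guest clock is off by %s, would resync\n", delta)
        }
    }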
	I1205 20:30:43.457217  585113 start.go:83] releasing machines lock for "embed-certs-789000", held for 19.895543311s
	I1205 20:30:43.457251  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.457563  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:43.460628  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461002  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.461042  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461175  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461758  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461937  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.462067  585113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:30:43.462120  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.462147  585113 ssh_runner.go:195] Run: cat /version.json
	I1205 20:30:43.462169  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.464859  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465147  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465237  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465264  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465472  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465497  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465589  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465711  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465768  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.465863  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465907  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.466006  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.466129  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.568909  585113 ssh_runner.go:195] Run: systemctl --version
	I1205 20:30:43.575175  585113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:30:43.725214  585113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:30:43.732226  585113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:30:43.732369  585113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:30:43.750186  585113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:30:43.750223  585113 start.go:495] detecting cgroup driver to use...
	I1205 20:30:43.750296  585113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:30:43.767876  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:30:43.783386  585113 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:30:43.783465  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:30:43.799917  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:30:43.815607  585113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:30:43.935150  585113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:30:44.094292  585113 docker.go:233] disabling docker service ...
	I1205 20:30:44.094378  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:30:44.111307  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:30:44.127528  585113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:30:44.284496  585113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:30:44.422961  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:30:44.439104  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:30:44.461721  585113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:30:44.461787  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.476398  585113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:30:44.476463  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.489821  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.502250  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.514245  585113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:30:44.528227  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.540205  585113 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.559447  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
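The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and make sure default_sysctls opens the unprivileged port range. A rough Go equivalent of a few of those edits, shown only to illustrate the in-place rewrite; the regular expressions and their ordering are approximations, not minikube's implementation.

    package main

    import (
        "log"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        conf := string(data)

        // Pin the pause image and the cgroup manager, like the sed commands above.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        // Drop any stale unprivileged-port entry, then re-add it inside default_sysctls.
        conf = regexp.MustCompile(`(?m)^[ \t]*"net\.ipv4\.ip_unprivileged_port_start=[^\n]*\n`).
            ReplaceAllString(conf, "")
        if !strings.Contains(conf, "default_sysctls") {
            conf = strings.Replace(conf, "cgroup_manager = \"cgroupfs\"",
                "cgroup_manager = \"cgroupfs\"\ndefault_sysctls = [\n]", 1)
        }
        conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
            ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")

        if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
            log.Fatal(err)
        }
    }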
	I1205 20:30:44.571434  585113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:30:44.583635  585113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:30:44.583717  585113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:30:44.600954  585113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
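The sysctl probe above fails because net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A local-machine sketch of that fallback; minikube runs these commands over SSH inside the guest, while this version uses os/exec directly and needs root.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // The sysctl key is absent until the br_netfilter module is loaded,
        // so a failure here is expected on a fresh guest.
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            log.Printf("couldn't verify netfilter (%v), loading br_netfilter", err)
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v", err)
            }
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            log.Fatalf("enable ip_forward: %v", err)
        }
    }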
	I1205 20:30:44.613381  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:44.733592  585113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:30:44.843948  585113 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:30:44.844036  585113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:30:44.849215  585113 start.go:563] Will wait 60s for crictl version
	I1205 20:30:44.849275  585113 ssh_runner.go:195] Run: which crictl
	I1205 20:30:44.853481  585113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:30:44.900488  585113 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:30:44.900583  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.944771  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.977119  585113 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:30:44.978527  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:44.981609  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982001  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:44.982037  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982240  585113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:30:44.986979  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:30:45.001779  585113 kubeadm.go:883] updating cluster {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:30:45.001935  585113 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:30:45.002021  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:45.041827  585113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:30:45.041918  585113 ssh_runner.go:195] Run: which lz4
	I1205 20:30:45.046336  585113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:30:45.050804  585113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:30:45.050852  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:30:43.482307  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .Start
	I1205 20:30:43.482501  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring networks are active...
	I1205 20:30:43.483222  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network default is active
	I1205 20:30:43.483574  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network mk-old-k8s-version-386085 is active
	I1205 20:30:43.484156  585602 main.go:141] libmachine: (old-k8s-version-386085) Getting domain xml...
	I1205 20:30:43.485045  585602 main.go:141] libmachine: (old-k8s-version-386085) Creating domain...
	I1205 20:30:44.770817  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting to get IP...
	I1205 20:30:44.772079  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:44.772538  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:44.772599  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:44.772517  586577 retry.go:31] will retry after 247.056435ms: waiting for machine to come up
	I1205 20:30:45.021096  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.021642  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.021560  586577 retry.go:31] will retry after 241.543543ms: waiting for machine to come up
	I1205 20:30:45.265136  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.265654  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.265683  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.265596  586577 retry.go:31] will retry after 324.624293ms: waiting for machine to come up
	I1205 20:30:45.592067  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.592603  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.592636  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.592558  586577 retry.go:31] will retry after 408.275958ms: waiting for machine to come up
	I1205 20:30:46.002321  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.002872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.002904  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.002808  586577 retry.go:31] will retry after 693.356488ms: waiting for machine to come up
	I1205 20:30:46.697505  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.697874  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.697900  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.697846  586577 retry.go:31] will retry after 906.807324ms: waiting for machine to come up
	I1205 20:30:46.612504  585113 crio.go:462] duration metric: took 1.56620974s to copy over tarball
	I1205 20:30:46.612585  585113 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:30:48.868826  585113 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256202653s)
	I1205 20:30:48.868863  585113 crio.go:469] duration metric: took 2.256329112s to extract the tarball
	I1205 20:30:48.868873  585113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:30:48.906872  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:48.955442  585113 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:30:48.955468  585113 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:30:48.955477  585113 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.31.2 crio true true} ...
	I1205 20:30:48.955603  585113 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-789000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:30:48.955668  585113 ssh_runner.go:195] Run: crio config
	I1205 20:30:49.007389  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:49.007419  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:49.007433  585113 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:30:49.007473  585113 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-789000 NodeName:embed-certs-789000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:30:49.007656  585113 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-789000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:30:49.007734  585113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:30:49.021862  585113 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:30:49.021949  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:30:49.032937  585113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1205 20:30:49.053311  585113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:30:49.073636  585113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1205 20:30:49.094437  585113 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I1205 20:30:49.098470  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:30:49.112013  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:49.246312  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:30:49.264250  585113 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000 for IP: 192.168.39.200
	I1205 20:30:49.264301  585113 certs.go:194] generating shared ca certs ...
	I1205 20:30:49.264329  585113 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:30:49.264565  585113 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:30:49.264627  585113 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:30:49.264641  585113 certs.go:256] generating profile certs ...
	I1205 20:30:49.264775  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/client.key
	I1205 20:30:49.264854  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key.5c723d79
	I1205 20:30:49.264894  585113 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key
	I1205 20:30:49.265026  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:30:49.265094  585113 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:30:49.265109  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:30:49.265144  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:30:49.265179  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:30:49.265215  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:30:49.265258  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:49.266137  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:30:49.297886  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:30:49.339461  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:30:49.385855  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:30:49.427676  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 20:30:49.466359  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:30:49.492535  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:30:49.518311  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:30:49.543545  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:30:49.567956  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:30:49.592361  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:30:49.616245  585113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:30:49.633947  585113 ssh_runner.go:195] Run: openssl version
	I1205 20:30:49.640353  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:30:49.652467  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657353  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657440  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.664045  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:30:49.679941  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:30:49.695153  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700397  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700458  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.706786  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:30:49.718994  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:30:49.731470  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736654  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736725  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.743034  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:30:49.755334  585113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:30:49.760378  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:30:49.766942  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:30:49.773911  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:30:49.780556  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:30:49.787004  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:30:49.793473  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:30:49.800009  585113 kubeadm.go:392] StartCluster: {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:30:49.800118  585113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:30:49.800163  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.844520  585113 cri.go:89] found id: ""
	I1205 20:30:49.844620  585113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:30:49.857604  585113 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:30:49.857640  585113 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:30:49.857702  585113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:30:49.870235  585113 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:30:49.871318  585113 kubeconfig.go:125] found "embed-certs-789000" server: "https://192.168.39.200:8443"
	I1205 20:30:49.873416  585113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:30:49.884281  585113 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
	I1205 20:30:49.884331  585113 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:30:49.884348  585113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:30:49.884410  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.930238  585113 cri.go:89] found id: ""
	I1205 20:30:49.930351  585113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:30:49.947762  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:30:49.957878  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:30:49.957902  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:30:49.957960  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:30:49.967261  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:30:49.967342  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:30:49.977868  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:30:49.987715  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:30:49.987777  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:30:49.998157  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.008224  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:30:50.008334  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.018748  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:30:50.028204  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:30:50.028287  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:30:50.038459  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:30:50.049458  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:50.175199  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:47.606601  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:47.607065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:47.607098  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:47.607001  586577 retry.go:31] will retry after 1.007867893s: waiting for machine to come up
	I1205 20:30:48.617140  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:48.617641  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:48.617674  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:48.617608  586577 retry.go:31] will retry after 1.15317606s: waiting for machine to come up
	I1205 20:30:49.773126  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:49.773670  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:49.773699  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:49.773620  586577 retry.go:31] will retry after 1.342422822s: waiting for machine to come up
	I1205 20:30:51.117592  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:51.118034  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:51.118065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:51.117973  586577 retry.go:31] will retry after 1.575794078s: waiting for machine to come up
	I1205 20:30:51.203131  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.027881984s)
	I1205 20:30:51.203193  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.415679  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.500984  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.598883  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:30:51.598986  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.099206  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.599755  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.619189  585113 api_server.go:72] duration metric: took 1.020303049s to wait for apiserver process to appear ...
	I1205 20:30:52.619236  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:30:52.619268  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:52.619903  585113 api_server.go:269] stopped: https://192.168.39.200:8443/healthz: Get "https://192.168.39.200:8443/healthz": dial tcp 192.168.39.200:8443: connect: connection refused
	I1205 20:30:53.119501  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.342363  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:30:55.342398  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:30:55.342418  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.471683  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.471729  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:55.619946  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.634855  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.634906  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.119928  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.128358  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:56.128396  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.620047  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.625869  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:30:56.633658  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:30:56.633698  585113 api_server.go:131] duration metric: took 4.014451973s to wait for apiserver health ...
	I1205 20:30:56.633712  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:56.633721  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:56.635658  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:30:52.695389  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:52.695838  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:52.695868  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:52.695784  586577 retry.go:31] will retry after 2.377931285s: waiting for machine to come up
	I1205 20:30:55.076859  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:55.077428  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:55.077469  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:55.077377  586577 retry.go:31] will retry after 2.586837249s: waiting for machine to come up
	I1205 20:30:56.637276  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:30:56.649131  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:30:56.670981  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:30:56.682424  585113 system_pods.go:59] 8 kube-system pods found
	I1205 20:30:56.682497  585113 system_pods.go:61] "coredns-7c65d6cfc9-hrrjc" [43d8b550-f29d-4a84-a2fc-b456abc486c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:30:56.682508  585113 system_pods.go:61] "etcd-embed-certs-789000" [99f232e4-1bc8-4f98-8bcf-8aa61d66158b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:30:56.682519  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [d1d11749-0ddc-4172-aaa9-bca00c64c912] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:30:56.682528  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [b291c993-cd10-4d0f-8c3e-a6db726cf83a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:30:56.682536  585113 system_pods.go:61] "kube-proxy-h79dj" [80abe907-24e7-4001-90a6-f4d10fd9fc6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:30:56.682544  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [490d7afa-24fd-43c8-8088-539bb7e1eb9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:30:56.682556  585113 system_pods.go:61] "metrics-server-6867b74b74-tlsjl" [cd1d73a4-27d1-4e68-b7d8-6da497fc4e53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:30:56.682570  585113 system_pods.go:61] "storage-provisioner" [3246e383-4f15-4222-a50c-c5b243fda12a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:30:56.682579  585113 system_pods.go:74] duration metric: took 11.566899ms to wait for pod list to return data ...
	I1205 20:30:56.682598  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:30:56.687073  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:30:56.687172  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:30:56.687222  585113 node_conditions.go:105] duration metric: took 4.613225ms to run NodePressure ...
	I1205 20:30:56.687273  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:56.981686  585113 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985944  585113 kubeadm.go:739] kubelet initialised
	I1205 20:30:56.985968  585113 kubeadm.go:740] duration metric: took 4.256434ms waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985976  585113 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:30:56.991854  585113 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:30:58.997499  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
	I1205 20:30:57.667200  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:57.667644  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:57.667681  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:57.667592  586577 retry.go:31] will retry after 2.856276116s: waiting for machine to come up
	I1205 20:31:00.525334  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:00.525796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:31:00.525830  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:31:00.525740  586577 retry.go:31] will retry after 5.119761936s: waiting for machine to come up
	I1205 20:31:00.999102  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:01.500344  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:01.500371  585113 pod_ready.go:82] duration metric: took 4.508490852s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:01.500382  585113 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:03.506621  585113 pod_ready.go:103] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:05.007677  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:05.007703  585113 pod_ready.go:82] duration metric: took 3.507315826s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:05.007713  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:05.646790  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647230  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has current primary IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647264  585602 main.go:141] libmachine: (old-k8s-version-386085) Found IP for machine: 192.168.72.144
	I1205 20:31:05.647278  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserving static IP address...
	I1205 20:31:05.647796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.647834  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | skip adding static IP to network mk-old-k8s-version-386085 - found existing host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"}
	I1205 20:31:05.647856  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserved static IP address: 192.168.72.144
	I1205 20:31:05.647872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Getting to WaitForSSH function...
	I1205 20:31:05.647889  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting for SSH to be available...
	I1205 20:31:05.650296  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650610  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.650643  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH client type: external
	I1205 20:31:05.650779  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa (-rw-------)
	I1205 20:31:05.650816  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:05.650837  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | About to run SSH command:
	I1205 20:31:05.650851  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | exit 0
	I1205 20:31:05.776876  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:05.777311  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:31:05.777948  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:05.780609  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781053  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.781091  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781319  585602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json ...
	I1205 20:31:05.781585  585602 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:05.781607  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:05.781942  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.784729  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785155  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.785191  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785326  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.785491  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785659  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785886  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.786078  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.786309  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.786323  585602 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:05.893034  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:05.893079  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893388  585602 buildroot.go:166] provisioning hostname "old-k8s-version-386085"
	I1205 20:31:05.893426  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893623  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.896484  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.896883  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.896910  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.897031  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.897252  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897441  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897615  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.897796  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.897965  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.897977  585602 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386085 && echo "old-k8s-version-386085" | sudo tee /etc/hostname
	I1205 20:31:06.017910  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386085
	
	I1205 20:31:06.017939  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.020956  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021298  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.021332  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021494  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021863  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021995  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.022137  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.022325  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.022342  585602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386085' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386085/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386085' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:06.138200  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
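	The script above only touches /etc/hosts when the node name is missing. A minimal way to confirm the hostname provisioning took effect, assuming a shell inside the guest (name taken from this run):
	  hostname                              # expect: old-k8s-version-386085
	  grep old-k8s-version-386085 /etc/hosts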
	I1205 20:31:06.138234  585602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:06.138261  585602 buildroot.go:174] setting up certificates
	I1205 20:31:06.138274  585602 provision.go:84] configureAuth start
	I1205 20:31:06.138287  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:06.138588  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.141488  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.141909  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.141965  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.142096  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.144144  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144720  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.144742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144951  585602 provision.go:143] copyHostCerts
	I1205 20:31:06.145020  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:06.145031  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:06.145085  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:06.145206  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:06.145219  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:06.145248  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:06.145335  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:06.145346  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:06.145376  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:06.145452  585602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386085 san=[127.0.0.1 192.168.72.144 localhost minikube old-k8s-version-386085]
	I1205 20:31:06.276466  585602 provision.go:177] copyRemoteCerts
	I1205 20:31:06.276530  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:06.276559  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.279218  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279550  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.279578  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279766  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.279990  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.280152  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.280317  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.362479  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:06.387631  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:31:06.413110  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:06.437931  585602 provision.go:87] duration metric: took 299.641033ms to configureAuth
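	The server cert just copied to /etc/docker/server.pem was generated with the SANs listed above (127.0.0.1, 192.168.72.144, localhost, minikube, old-k8s-version-386085). If openssl happens to be available in the guest image, a quick sketch for double-checking them:
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	    | grep -A1 'Subject Alternative Name'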
	I1205 20:31:06.437962  585602 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:06.438176  585602 config.go:182] Loaded profile config "old-k8s-version-386085": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:31:06.438272  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.441059  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441413  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.441444  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441655  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.441846  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.441992  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.442174  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.442379  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.442552  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.442568  585602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:06.655666  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:06.655699  585602 machine.go:96] duration metric: took 874.099032ms to provisionDockerMachine
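	The restart command above drops the insecure-registry flag into /etc/sysconfig/crio.minikube, which the crio unit in the minikube guest image is presumably reading as an environment file. A quick sanity check from inside the VM (paths exactly as logged; the expected file content is the value echoed back above):
	  sudo cat /etc/sysconfig/crio.minikube
	  sudo systemctl is-active crio         # expect: active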
	I1205 20:31:06.655713  585602 start.go:293] postStartSetup for "old-k8s-version-386085" (driver="kvm2")
	I1205 20:31:06.655723  585602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:06.655752  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.656082  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:06.656115  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.658835  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659178  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.659229  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659378  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.659636  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.659808  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.659971  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.744484  585602 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:06.749025  585602 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:06.749060  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:06.749134  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:06.749273  585602 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:06.749411  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:06.760720  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:06.785449  585602 start.go:296] duration metric: took 129.720092ms for postStartSetup
	I1205 20:31:06.785500  585602 fix.go:56] duration metric: took 23.328073686s for fixHost
	I1205 20:31:06.785526  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.788417  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.788797  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.788828  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.789049  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.789296  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789483  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789688  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.789870  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.790046  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.790065  585602 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:06.897781  585929 start.go:364] duration metric: took 3m3.751494327s to acquireMachinesLock for "default-k8s-diff-port-942599"
	I1205 20:31:06.897847  585929 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:06.897858  585929 fix.go:54] fixHost starting: 
	I1205 20:31:06.898355  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:06.898419  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:06.916556  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I1205 20:31:06.917111  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:06.917648  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:31:06.917674  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:06.918014  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:06.918256  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:06.918402  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:31:06.920077  585929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-942599: state=Stopped err=<nil>
	I1205 20:31:06.920105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	W1205 20:31:06.920257  585929 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:06.922145  585929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-942599" ...
	I1205 20:31:06.923548  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Start
	I1205 20:31:06.923770  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring networks are active...
	I1205 20:31:06.924750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network default is active
	I1205 20:31:06.925240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network mk-default-k8s-diff-port-942599 is active
	I1205 20:31:06.925721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Getting domain xml...
	I1205 20:31:06.926719  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Creating domain...
	I1205 20:31:06.897579  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430666.872047181
	
	I1205 20:31:06.897606  585602 fix.go:216] guest clock: 1733430666.872047181
	I1205 20:31:06.897615  585602 fix.go:229] Guest: 2024-12-05 20:31:06.872047181 +0000 UTC Remote: 2024-12-05 20:31:06.785506394 +0000 UTC m=+234.970971247 (delta=86.540787ms)
	I1205 20:31:06.897679  585602 fix.go:200] guest clock delta is within tolerance: 86.540787ms
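	The guest-clock check compares the output of date +%s.%N on the VM against the host's clock and accepts small drift. A rough manual equivalent, using the SSH key and address from this run (the StrictHostKeyChecking option and the awk arithmetic are illustrative conveniences, not taken from the log):
	  guest=$(ssh -o StrictHostKeyChecking=no \
	      -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa \
	      docker@192.168.72.144 'date +%s.%N')
	  host=$(date +%s.%N)
	  awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.6fs\n", h - g }'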
	I1205 20:31:06.897691  585602 start.go:83] releasing machines lock for "old-k8s-version-386085", held for 23.440303187s
	I1205 20:31:06.897727  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.898085  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.901127  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901530  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.901567  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901719  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902413  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902626  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902776  585602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:06.902827  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.902878  585602 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:06.902903  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.905664  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.905912  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906050  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906086  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906256  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906341  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906367  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906411  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906517  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906684  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906837  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906849  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.907112  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.986078  585602 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:07.009500  585602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:07.159146  585602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:07.166263  585602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:07.166358  585602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:07.186021  585602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:07.186063  585602 start.go:495] detecting cgroup driver to use...
	I1205 20:31:07.186140  585602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:07.205074  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:07.221207  585602 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:07.221268  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:07.236669  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:07.252848  585602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:07.369389  585602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:07.504993  585602 docker.go:233] disabling docker service ...
	I1205 20:31:07.505101  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:07.523294  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:07.538595  585602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:07.687830  585602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:07.816176  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:07.833624  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:07.853409  585602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:31:07.853478  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.865346  585602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:07.865426  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.877962  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.889255  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
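	Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch of the relevant lines, not a verbatim copy of the file):
	  pause_image = "registry.k8s.io/pause:3.2"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"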
	I1205 20:31:07.901632  585602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:07.916169  585602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:07.927092  585602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:07.927169  585602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:07.942288  585602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:31:07.953314  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:08.092156  585602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:08.205715  585602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:08.205799  585602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:08.214280  585602 start.go:563] Will wait 60s for crictl version
	I1205 20:31:08.214351  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:08.220837  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:08.265983  585602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:08.266065  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.295839  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.327805  585602 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 20:31:07.014634  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:08.018024  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.018062  585113 pod_ready.go:82] duration metric: took 3.010340127s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.018080  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024700  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.024731  585113 pod_ready.go:82] duration metric: took 6.639434ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024744  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030379  585113 pod_ready.go:93] pod "kube-proxy-h79dj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.030399  585113 pod_ready.go:82] duration metric: took 5.648086ms for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030408  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036191  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.036211  585113 pod_ready.go:82] duration metric: took 5.797344ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036223  585113 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:10.051737  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:08.329278  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:08.332352  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332700  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:08.332747  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332930  585602 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:08.337611  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:08.350860  585602 kubeadm.go:883] updating cluster {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:08.351016  585602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:31:08.351090  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:08.403640  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:08.403716  585602 ssh_runner.go:195] Run: which lz4
	I1205 20:31:08.408211  585602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:08.413136  585602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:08.413168  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 20:31:10.209351  585602 crio.go:462] duration metric: took 1.801169802s to copy over tarball
	I1205 20:31:10.209438  585602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:31:08.255781  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting to get IP...
	I1205 20:31:08.256721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257262  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.257164  586715 retry.go:31] will retry after 301.077952ms: waiting for machine to come up
	I1205 20:31:08.559682  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560187  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560216  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.560130  586715 retry.go:31] will retry after 364.457823ms: waiting for machine to come up
	I1205 20:31:08.926774  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927371  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927401  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.927274  586715 retry.go:31] will retry after 461.958198ms: waiting for machine to come up
	I1205 20:31:09.390861  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391502  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.391432  586715 retry.go:31] will retry after 587.049038ms: waiting for machine to come up
	I1205 20:31:09.980451  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.980999  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.981026  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.980932  586715 retry.go:31] will retry after 499.551949ms: waiting for machine to come up
	I1205 20:31:10.482653  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483188  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483219  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:10.483135  586715 retry.go:31] will retry after 749.476034ms: waiting for machine to come up
	I1205 20:31:11.233788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234286  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234315  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:11.234227  586715 retry.go:31] will retry after 768.81557ms: waiting for machine to come up
	I1205 20:31:12.004904  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005427  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005460  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:12.005382  586715 retry.go:31] will retry after 1.360132177s: waiting for machine to come up
	I1205 20:31:12.549406  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:15.043540  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:13.303553  585602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094044744s)
	I1205 20:31:13.303598  585602 crio.go:469] duration metric: took 3.094215888s to extract the tarball
	I1205 20:31:13.303610  585602 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:13.350989  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:13.388660  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:13.388702  585602 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:13.388814  585602 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.388822  585602 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.388832  585602 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.388853  585602 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.388881  585602 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.388904  585602 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:31:13.388823  585602 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.388859  585602 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390414  585602 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390941  585602 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.391016  585602 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.390927  585602 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.391373  585602 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.391378  585602 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.565006  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.577450  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 20:31:13.584653  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.597086  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.619848  585602 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 20:31:13.619899  585602 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.619955  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.623277  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.628407  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.697151  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.703111  585602 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 20:31:13.703167  585602 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 20:31:13.703219  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736004  585602 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 20:31:13.736059  585602 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.736058  585602 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 20:31:13.736078  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.736094  585602 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.736104  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736135  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736187  585602 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 20:31:13.736207  585602 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.736235  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.783651  585602 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 20:31:13.783706  585602 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.783758  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.787597  585602 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 20:31:13.787649  585602 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.787656  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.787692  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.828445  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.828491  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.828544  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.828573  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.828616  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.828635  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.890937  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.992600  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.992661  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.992725  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.992780  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.095364  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:14.095462  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:14.163224  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 20:31:14.163320  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:14.163339  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:14.163420  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:14.163510  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.243805  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 20:31:14.243860  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 20:31:14.243881  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 20:31:14.287718  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 20:31:14.290994  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 20:31:14.291049  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 20:31:14.579648  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:14.728232  585602 cache_images.go:92] duration metric: took 1.339506459s to LoadCachedImages
	W1205 20:31:14.728389  585602 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1205 20:31:14.728417  585602 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.20.0 crio true true} ...
	I1205 20:31:14.728570  585602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386085 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
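	The unit override and flags above are written out a few steps later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service (see the scp lines below). A small sketch for confirming what kubelet will actually be started with, run inside the guest:
	  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  systemctl show kubelet -p ExecStart --no-pager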
	I1205 20:31:14.728672  585602 ssh_runner.go:195] Run: crio config
	I1205 20:31:14.778932  585602 cni.go:84] Creating CNI manager for ""
	I1205 20:31:14.778957  585602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:14.778967  585602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:14.778987  585602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386085 NodeName:old-k8s-version-386085 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:31:14.779131  585602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386085"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:14.779196  585602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 20:31:14.792400  585602 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:14.792494  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:14.802873  585602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:31:14.821562  585602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:14.839442  585602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
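	The file just copied, /var/tmp/minikube/kubeadm.yaml.new, is the rendered kubeadm config shown above. A reasonable sanity check on the node is to read it back and confirm the matching kubeadm binary (-o short is a standard kubeadm flag, not something minikube adds):
	  sudo cat /var/tmp/minikube/kubeadm.yaml.new
	  sudo /var/lib/minikube/binaries/v1.20.0/kubeadm version -o short   # expect: v1.20.0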
	I1205 20:31:14.861314  585602 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:14.865457  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:14.878278  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:15.002193  585602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:15.030699  585602 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085 for IP: 192.168.72.144
	I1205 20:31:15.030734  585602 certs.go:194] generating shared ca certs ...
	I1205 20:31:15.030758  585602 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.030975  585602 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:15.031027  585602 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:15.031048  585602 certs.go:256] generating profile certs ...
	I1205 20:31:15.031206  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key
	I1205 20:31:15.031276  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18
	I1205 20:31:15.031324  585602 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key
	I1205 20:31:15.031489  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:15.031535  585602 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:15.031550  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:15.031581  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:15.031612  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:15.031644  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:15.031698  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:15.032410  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:15.063090  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:15.094212  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:15.124685  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:15.159953  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 20:31:15.204250  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:31:15.237483  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:15.276431  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:15.303774  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:15.328872  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:15.353852  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:15.380916  585602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:15.401082  585602 ssh_runner.go:195] Run: openssl version
	I1205 20:31:15.407442  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:15.420377  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425721  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425800  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.432475  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:15.446140  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:15.459709  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465165  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465241  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.471609  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:15.484139  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:15.496636  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501575  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501634  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.507814  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
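Each CA bundle copied to /usr/share/ca-certificates is hashed with openssl x509 -hash and symlinked into /etc/ssl/certs as <hash>.0, which is how OpenSSL-style trust stores locate it. A rough local sketch of those two steps, assuming the openssl binary is on PATH and using illustrative paths:

// Illustrative sketch: compute the OpenSSL subject hash of a PEM certificate
// and create the <hash>.0 symlink that the logged ln -fs commands set up.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pemPath, certsDir string) error {
	// Equivalent to: openssl x509 -hash -noout -in <pemPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent to: ln -fs <pemPath> <certsDir>/<hash>.0
	_ = os.Remove(link) // -f behaviour: replace an existing link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}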
	I1205 20:31:15.521234  585602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:15.526452  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:15.532999  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:15.540680  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:15.547455  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:15.553996  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:15.560574  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
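The run of openssl x509 -checkend 86400 commands confirms each control-plane certificate remains valid for at least another 24 hours before the restart is attempted. The same check can be written in pure Go with crypto/x509; the path below is just one of the certificates from the log:

// Sketch: report whether a PEM-encoded certificate expires within the next 24h,
// mirroring `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + d" falls past the certificate's NotAfter timestamp.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}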
	I1205 20:31:15.568489  585602 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:15.568602  585602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:15.568682  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.610693  585602 cri.go:89] found id: ""
	I1205 20:31:15.610808  585602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:15.622685  585602 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:15.622709  585602 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:15.622764  585602 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:15.633754  585602 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:15.634922  585602 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386085" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:31:15.635682  585602 kubeconfig.go:62] /home/jenkins/minikube-integration/20052-530897/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386085" cluster setting kubeconfig missing "old-k8s-version-386085" context setting]
	I1205 20:31:15.636878  585602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.719767  585602 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:15.731576  585602 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I1205 20:31:15.731622  585602 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:15.731639  585602 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:15.731705  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.777769  585602 cri.go:89] found id: ""
	I1205 20:31:15.777875  585602 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:15.797121  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:15.807961  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:15.807991  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:15.808042  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:31:15.818177  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:15.818270  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:15.829092  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:31:15.839471  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:15.839564  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:15.850035  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.859907  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:15.859984  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.870882  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:31:15.881475  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:15.881549  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
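Since none of /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf reference https://control-plane.minikube.internal:8443 (here they are missing entirely), each one is grepped and then removed so kubeadm can regenerate it. A compact local sketch of that grep-then-remove loop (the real commands run over SSH):

// Sketch: remove any kubeconfig that does not reference the expected endpoint,
// mirroring the logged grep-then-rm sequence (run locally here, not over SSH).
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: delete so kubeadm can recreate it.
			fmt.Printf("removing %s\n", path)
			_ = os.Remove(path)
		}
	}
}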
	I1205 20:31:15.892078  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:15.904312  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.042308  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.787487  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:13.367666  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368154  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368185  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:13.368096  586715 retry.go:31] will retry after 1.319101375s: waiting for machine to come up
	I1205 20:31:14.689562  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690039  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:14.689996  586715 retry.go:31] will retry after 2.267379471s: waiting for machine to come up
	I1205 20:31:16.959412  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959882  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959915  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:16.959804  586715 retry.go:31] will retry after 2.871837018s: waiting for machine to come up
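Interleaved with the old-k8s-version logs, libmachine is still waiting for the default-k8s-diff-port-942599 VM to obtain an IP, retrying with a growing delay (1.3s, 2.2s, 2.8s, ...). A generic retry-until-deadline helper in that spirit (sketch only, not the retry.go implementation; the 1.5x backoff factor is illustrative):

// Sketch of retry-with-growing-backoff, in the spirit of the retry lines above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retry calls fn until it succeeds or the deadline passes, multiplying the
// delay between attempts by 1.5 each time.
func retry(fn func() error, initial time.Duration, deadline time.Time) error {
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("gave up: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	err := retry(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, time.Second, time.Now().Add(30*time.Second))
	fmt.Println("result:", err, "after", attempts, "attempts")
}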
	I1205 20:31:17.044878  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:19.543265  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:17.036864  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.128855  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.219276  585602 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:17.219380  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:17.720206  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.219623  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.719555  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.219776  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.719967  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.219686  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.719806  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.219875  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.719915  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.834750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835299  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835326  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:19.835203  586715 retry.go:31] will retry after 2.740879193s: waiting for machine to come up
	I1205 20:31:22.577264  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577746  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577775  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:22.577709  586715 retry.go:31] will retry after 3.807887487s: waiting for machine to come up
	I1205 20:31:22.043635  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:24.543255  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:22.219930  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:22.719848  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.719903  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.220505  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.719726  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.220161  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.720115  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.220399  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.719567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
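After the kubeadm init phases, the bootstrapper polls roughly every 500ms for a kube-apiserver process via sudo pgrep until one appears or a timeout expires. A local sketch of such a wait loop (assumes pgrep is installed; the pattern and timeout are illustrative):

// Sketch: wait for a process matching a pattern to appear, polling pgrep
// every 500ms, similar to the logged apiserver wait loop.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep -f exits 0 when at least one process command line matches.
		if err := exec.Command("pgrep", "-f", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q within %v", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver process is up")
}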
	I1205 20:31:27.669618  585025 start.go:364] duration metric: took 59.106849765s to acquireMachinesLock for "no-preload-816185"
	I1205 20:31:27.669680  585025 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:27.669689  585025 fix.go:54] fixHost starting: 
	I1205 20:31:27.670111  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:27.670153  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:27.689600  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1205 20:31:27.690043  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:27.690508  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:31:27.690530  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:27.690931  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:27.691146  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:27.691279  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:31:27.692881  585025 fix.go:112] recreateIfNeeded on no-preload-816185: state=Stopped err=<nil>
	I1205 20:31:27.692905  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	W1205 20:31:27.693059  585025 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:27.694833  585025 out.go:177] * Restarting existing kvm2 VM for "no-preload-816185" ...
	I1205 20:31:26.389296  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389828  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Found IP for machine: 192.168.50.96
	I1205 20:31:26.389866  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has current primary IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserving static IP address...
	I1205 20:31:26.390321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserved static IP address: 192.168.50.96
	I1205 20:31:26.390354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for SSH to be available...
	I1205 20:31:26.390380  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.390404  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | skip adding static IP to network mk-default-k8s-diff-port-942599 - found existing host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"}
	I1205 20:31:26.390420  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Getting to WaitForSSH function...
	I1205 20:31:26.392509  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392875  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.392912  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH client type: external
	I1205 20:31:26.392988  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa (-rw-------)
	I1205 20:31:26.393057  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:26.393086  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | About to run SSH command:
	I1205 20:31:26.393105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | exit 0
	I1205 20:31:26.520867  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | SSH cmd err, output: <nil>: 
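WaitForSSH shells out to the system ssh client with host-key checking disabled and the machine's private key, and runs exit 0 until the connection succeeds. A stripped-down sketch of assembling that probe command, with the options copied from the log above:

// Sketch: build the external `ssh ... exit 0` probe that libmachine uses to
// detect when a VM accepts SSH connections (options taken from the log above).
package main

import (
	"fmt"
	"os/exec"
)

func sshProbe(user, ip, keyPath string) *exec.Cmd {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit", "0", // remote command: succeed as soon as a shell is reachable
	}
	return exec.Command("ssh", args...)
}

func main() {
	cmd := sshProbe("docker", "192.168.50.96", "/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa")
	if err := cmd.Run(); err != nil {
		fmt.Println("ssh not ready yet:", err)
		return
	}
	fmt.Println("ssh is available")
}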
	I1205 20:31:26.521212  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetConfigRaw
	I1205 20:31:26.521857  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.524512  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.524853  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.524883  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.525141  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:31:26.525404  585929 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:26.525425  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:26.525639  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.527806  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.528121  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528257  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.528474  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528635  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528771  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.528902  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.529132  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.529147  585929 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:26.645385  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:26.645429  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645719  585929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-942599"
	I1205 20:31:26.645751  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645962  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.648906  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649316  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.649346  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649473  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.649686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649880  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649998  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.650161  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.650338  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.650354  585929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-942599 && echo "default-k8s-diff-port-942599" | sudo tee /etc/hostname
	I1205 20:31:26.780217  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942599
	
	I1205 20:31:26.780253  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.783240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783628  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.783660  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783804  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.783997  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784162  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.784530  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.784747  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.784766  585929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-942599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-942599/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-942599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:26.909975  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:26.910006  585929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:26.910087  585929 buildroot.go:174] setting up certificates
	I1205 20:31:26.910101  585929 provision.go:84] configureAuth start
	I1205 20:31:26.910114  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.910440  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.913667  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.914094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.917031  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917430  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.917462  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917608  585929 provision.go:143] copyHostCerts
	I1205 20:31:26.917681  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:26.917706  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:26.917772  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:26.917889  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:26.917900  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:26.917935  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:26.918013  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:26.918023  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:26.918065  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:26.918163  585929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-942599 san=[127.0.0.1 192.168.50.96 default-k8s-diff-port-942599 localhost minikube]
	I1205 20:31:27.003691  585929 provision.go:177] copyRemoteCerts
	I1205 20:31:27.003783  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:27.003821  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.006311  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006632  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.006665  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006820  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.007011  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.007153  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.007274  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.094973  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:27.121684  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 20:31:27.146420  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:27.171049  585929 provision.go:87] duration metric: took 260.930345ms to configureAuth
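configureAuth regenerates the docker-machine style server certificate with SANs for 127.0.0.1, the VM IP, the machine name, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker. A simplified, self-signed stand-in for that generation (the real certificate is signed by certs/ca.pem; key size and validity below are arbitrary):

// Simplified sketch: create a self-signed TLS server certificate whose SANs
// match the ones listed in the log (the real cert is CA-signed, not self-signed).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-942599"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-942599", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.96")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Write server.pem; the private key would be marshalled and written similarly.
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
	fmt.Println("wrote server.pem")
}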
	I1205 20:31:27.171083  585929 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:27.171268  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:27.171385  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.174287  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174677  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.174717  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174946  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.175168  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175338  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.175703  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.175927  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.175959  585929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:27.416697  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:27.416724  585929 machine.go:96] duration metric: took 891.305367ms to provisionDockerMachine
	I1205 20:31:27.416737  585929 start.go:293] postStartSetup for "default-k8s-diff-port-942599" (driver="kvm2")
	I1205 20:31:27.416748  585929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:27.416786  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.417143  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:27.417183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.419694  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420041  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.420072  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420259  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.420488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.420681  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.420813  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.507592  585929 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:27.512178  585929 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:27.512209  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:27.512297  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:27.512416  585929 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:27.512544  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:27.522860  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:27.550167  585929 start.go:296] duration metric: took 133.414654ms for postStartSetup
	I1205 20:31:27.550211  585929 fix.go:56] duration metric: took 20.652352836s for fixHost
	I1205 20:31:27.550240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.553056  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.553490  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553631  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.553822  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554007  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.554372  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.554584  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.554603  585929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:27.669428  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430687.619179277
	
	I1205 20:31:27.669455  585929 fix.go:216] guest clock: 1733430687.619179277
	I1205 20:31:27.669467  585929 fix.go:229] Guest: 2024-12-05 20:31:27.619179277 +0000 UTC Remote: 2024-12-05 20:31:27.550217419 +0000 UTC m=+204.551998169 (delta=68.961858ms)
	I1205 20:31:27.669506  585929 fix.go:200] guest clock delta is within tolerance: 68.961858ms
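fixHost ends by running date +%s.%N in the guest and comparing the result with the host clock; the ~69ms delta is within tolerance, so no adjustment is made. A sketch of parsing that output and computing the delta (the tolerance constant is illustrative):

// Sketch: parse `date +%s.%N` output from a guest and compare it with the
// local clock, as the guest-clock check in the log does.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func parseGuestClock(s string) (time.Time, error) {
	// Float parsing is adequate for a sketch; exact nanosecond precision is lost.
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733430687.619179277")
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // illustrative threshold
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}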
	I1205 20:31:27.669514  585929 start.go:83] releasing machines lock for "default-k8s-diff-port-942599", held for 20.771694403s
	I1205 20:31:27.669559  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.669877  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:27.672547  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.672978  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.673009  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.673224  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673992  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.674125  585929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:27.674176  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.674201  585929 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:27.674231  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.677006  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677388  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677418  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677437  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677565  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.677745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.677919  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.677925  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677948  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.678115  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.678107  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.678258  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.678382  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.678527  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.790786  585929 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:27.797092  585929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:27.946053  585929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:27.953979  585929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:27.954073  585929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:27.975059  585929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:27.975090  585929 start.go:495] detecting cgroup driver to use...
	I1205 20:31:27.975160  585929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:27.991738  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:28.006412  585929 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:28.006529  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:28.021329  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:28.037390  585929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:28.155470  585929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:28.326332  585929 docker.go:233] disabling docker service ...
	I1205 20:31:28.326415  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:28.343299  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:28.358147  585929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:28.493547  585929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:28.631184  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:28.647267  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:28.670176  585929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:28.670269  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.686230  585929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:28.686312  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.702991  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.715390  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.731909  585929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:28.745042  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.757462  585929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.779049  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
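The sed runs above are how the drop-in at /etc/crio/crio.conf.d/02-crio.conf gets its pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl. As a rough illustration only (not minikube's own code; the helper name rewriteCrioConf and the hard-coded path are assumptions taken from the log), the first two edits could be done from Go like this:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf pins pause_image and cgroup_manager in a CRI-O drop-in,
// mirroring the first two sed edits shown in the log above. Illustrative
// sketch only; needs root to write the file.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = "%s"`, pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = "%s"`, cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Path and values as they appear in the log; adjust for your host.
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}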
	I1205 20:31:28.790960  585929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:28.806652  585929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:28.806724  585929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:28.821835  585929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
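When the bridge-netfilter sysctl is missing (the status 255 above), the run falls back to loading br_netfilter and enabling IPv4 forwarding. A minimal Go sketch of those two fallback commands, illustrative only; both need root, exactly as the sudo invocations above do:

package main

import (
	"log"
	"os/exec"
)

// Mirrors the two fallback steps in the log: load br_netfilter when the
// bridge sysctl is missing, then make sure IPv4 forwarding is on.
func main() {
	steps := [][]string{
		{"sudo", "modprobe", "br_netfilter"},
		{"sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", s, err, out)
		}
	}
}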
	I1205 20:31:28.832688  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:28.967877  585929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:29.084571  585929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:29.084666  585929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:29.089892  585929 start.go:563] Will wait 60s for crictl version
	I1205 20:31:29.089958  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:31:29.094021  585929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:29.132755  585929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:29.132843  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.161779  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.194415  585929 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:31:27.042893  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:29.545284  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:27.696342  585025 main.go:141] libmachine: (no-preload-816185) Calling .Start
	I1205 20:31:27.696546  585025 main.go:141] libmachine: (no-preload-816185) Ensuring networks are active...
	I1205 20:31:27.697272  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network default is active
	I1205 20:31:27.697720  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network mk-no-preload-816185 is active
	I1205 20:31:27.698153  585025 main.go:141] libmachine: (no-preload-816185) Getting domain xml...
	I1205 20:31:27.698993  585025 main.go:141] libmachine: (no-preload-816185) Creating domain...
	I1205 20:31:29.005551  585025 main.go:141] libmachine: (no-preload-816185) Waiting to get IP...
	I1205 20:31:29.006633  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.007124  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.007217  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.007100  586921 retry.go:31] will retry after 264.716976ms: waiting for machine to come up
	I1205 20:31:29.273821  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.274364  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.274393  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.274318  586921 retry.go:31] will retry after 307.156436ms: waiting for machine to come up
	I1205 20:31:29.582968  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.583583  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.583621  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.583531  586921 retry.go:31] will retry after 335.63624ms: waiting for machine to come up
	I1205 20:31:29.921262  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.921823  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.921855  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.921771  586921 retry.go:31] will retry after 577.408278ms: waiting for machine to come up
	I1205 20:31:30.500556  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:30.501058  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:30.501095  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:30.500999  586921 retry.go:31] will retry after 757.019094ms: waiting for machine to come up
	I1205 20:31:27.220124  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:27.719460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.719599  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.219672  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.720450  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.220436  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.719573  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.220357  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.720052  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.195845  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:29.198779  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199138  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:29.199171  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199365  585929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:29.204553  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
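The bash one-liner above refreshes the host.minikube.internal entry by filtering any old line out of /etc/hosts and appending a new one. A small Go sketch of the same filter-and-append idea (ensureHostsEntry is a made-up helper name; writing /etc/hosts requires root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites the hosts file so exactly one line maps host to
// ip, the same filter-and-append trick as the bash one-liner in the log.
func ensureHostsEntry(path, host, ip string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Name and IP taken from the log line above.
	fmt.Println(ensureHostsEntry("/etc/hosts", "host.minikube.internal", "192.168.50.1"))
}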
	I1205 20:31:29.217722  585929 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:29.217873  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:29.217943  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:29.259006  585929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:29.259105  585929 ssh_runner.go:195] Run: which lz4
	I1205 20:31:29.264049  585929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:29.268978  585929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:29.269019  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:31:30.811247  585929 crio.go:462] duration metric: took 1.547244528s to copy over tarball
	I1205 20:31:30.811340  585929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:31:32.043543  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:34.044420  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:31.260083  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.260626  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.260658  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.260593  586921 retry.go:31] will retry after 593.111543ms: waiting for machine to come up
	I1205 20:31:31.854850  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.855286  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.855316  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.855224  586921 retry.go:31] will retry after 832.693762ms: waiting for machine to come up
	I1205 20:31:32.690035  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:32.690489  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:32.690515  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:32.690448  586921 retry.go:31] will retry after 1.128242733s: waiting for machine to come up
	I1205 20:31:33.820162  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:33.820798  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:33.820831  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:33.820732  586921 retry.go:31] will retry after 1.331730925s: waiting for machine to come up
	I1205 20:31:35.154230  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:35.154661  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:35.154690  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:35.154590  586921 retry.go:31] will retry after 2.19623815s: waiting for machine to come up
	I1205 20:31:32.220318  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:32.719780  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.220114  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.719554  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.720021  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.219461  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.720334  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.219480  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.720159  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.093756  585929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282380101s)
	I1205 20:31:33.093791  585929 crio.go:469] duration metric: took 2.282510298s to extract the tarball
	I1205 20:31:33.093802  585929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:33.132232  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:33.188834  585929 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:31:33.188868  585929 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:31:33.188879  585929 kubeadm.go:934] updating node { 192.168.50.96 8444 v1.31.2 crio true true} ...
	I1205 20:31:33.189027  585929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-942599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:31:33.189114  585929 ssh_runner.go:195] Run: crio config
	I1205 20:31:33.235586  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:31:33.235611  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:33.235621  585929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:33.235644  585929 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-942599 NodeName:default-k8s-diff-port-942599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:31:33.235770  585929 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-942599"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.96"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:33.235835  585929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:31:33.246737  585929 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:33.246829  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:33.257763  585929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1205 20:31:33.276025  585929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:33.294008  585929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1205 20:31:33.311640  585929 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:33.315963  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:33.328834  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:33.439221  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:33.457075  585929 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599 for IP: 192.168.50.96
	I1205 20:31:33.457103  585929 certs.go:194] generating shared ca certs ...
	I1205 20:31:33.457131  585929 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:33.457337  585929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:33.457407  585929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:33.457420  585929 certs.go:256] generating profile certs ...
	I1205 20:31:33.457528  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.key
	I1205 20:31:33.457612  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key.d50b8fb2
	I1205 20:31:33.457668  585929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key
	I1205 20:31:33.457824  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:33.457870  585929 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:33.457885  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:33.457924  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:33.457959  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:33.457989  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:33.458044  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:33.459092  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:33.502129  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:33.533461  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:33.572210  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:33.597643  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 20:31:33.621382  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:31:33.648568  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:33.682320  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:33.707415  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:33.733418  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:33.760333  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:33.794070  585929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:33.813531  585929 ssh_runner.go:195] Run: openssl version
	I1205 20:31:33.820336  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:33.832321  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839066  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839135  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.845526  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:33.857376  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:33.868864  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873732  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873799  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.881275  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:33.893144  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:33.904679  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909686  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909760  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.915937  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:31:33.927401  585929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:33.932326  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:33.939165  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:33.945630  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:33.951867  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:33.957857  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:33.963994  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
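Each openssl run above is a 24-hour expiry check (-checkend 86400) on a control-plane certificate. The same question can be answered without shelling out by comparing the certificate's NotAfter against now plus 24h; a sketch follows (expiresWithin is an illustrative helper, not a minikube function):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same check `openssl x509 -checkend 86400` performs in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Two of the paths checked in the log.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, err)
	}
}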
	I1205 20:31:33.969964  585929 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:33.970050  585929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:33.970103  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.016733  585929 cri.go:89] found id: ""
	I1205 20:31:34.016814  585929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:34.027459  585929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:34.027478  585929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:34.027523  585929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:34.037483  585929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:34.038588  585929 kubeconfig.go:125] found "default-k8s-diff-port-942599" server: "https://192.168.50.96:8444"
	I1205 20:31:34.041140  585929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:34.050903  585929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.96
	I1205 20:31:34.050938  585929 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:34.050956  585929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:34.051014  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.090840  585929 cri.go:89] found id: ""
	I1205 20:31:34.090932  585929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:34.107686  585929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:34.118277  585929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:34.118305  585929 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:34.118359  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 20:31:34.127654  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:34.127733  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:34.137295  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 20:31:34.147005  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:34.147076  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:34.158576  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.167933  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:34.168022  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.177897  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 20:31:34.187467  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:34.187539  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
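The grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it still points at https://control-plane.minikube.internal:8444 and is removed otherwise (here all four are simply absent). A compact Go sketch of that check-then-delete pattern (dropIfStale is an illustrative name, not minikube's API):

package main

import (
	"fmt"
	"os"
	"strings"
)

// dropIfStale removes a kubeconfig that no longer mentions the expected
// control-plane endpoint; missing files are treated as already clean.
func dropIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if !strings.Contains(string(data), endpoint) {
		return os.Remove(path)
	}
	return nil
}

func main() {
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		fmt.Println(p, dropIfStale(p, "https://control-plane.minikube.internal:8444"))
	}
}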
	I1205 20:31:34.197825  585929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:34.210775  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:34.337491  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.308389  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.549708  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.624390  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.706794  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:35.706912  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.207620  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.707990  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.727214  585929 api_server.go:72] duration metric: took 1.020418782s to wait for apiserver process to appear ...
	I1205 20:31:36.727257  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:31:36.727289  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:36.727908  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
	I1205 20:31:37.228102  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
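The healthz loop above polls https://192.168.50.96:8444/healthz and treats connection-refused and client timeouts as "not ready yet" until the 60s budget runs out. A minimal Go sketch of that polling loop (waitHealthz is an illustrative helper; certificate verification is skipped only because this throwaway probe targets the cluster's internal CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it answers 200 OK
// or the deadline passes, roughly what api_server.go is doing in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification for this illustrative probe only; the apiserver
		// serves a cert signed by the cluster-internal CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.96:8444/healthz", time.Minute))
}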
	I1205 20:31:36.544564  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:39.043806  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:37.352371  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:37.352911  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:37.352946  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:37.352862  586921 retry.go:31] will retry after 2.333670622s: waiting for machine to come up
	I1205 20:31:39.688034  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:39.688597  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:39.688630  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:39.688537  586921 retry.go:31] will retry after 2.476657304s: waiting for machine to come up
	I1205 20:31:37.219933  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:37.720360  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.219574  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.720034  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.219449  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.719752  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.219718  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.719771  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.219548  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.720381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.228416  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:42.228489  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:41.044569  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:43.542439  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:45.543063  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:42.168384  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:42.168759  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:42.168781  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:42.168719  586921 retry.go:31] will retry after 3.531210877s: waiting for machine to come up
	I1205 20:31:45.701387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701831  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has current primary IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701868  585025 main.go:141] libmachine: (no-preload-816185) Found IP for machine: 192.168.61.37
	I1205 20:31:45.701882  585025 main.go:141] libmachine: (no-preload-816185) Reserving static IP address...
	I1205 20:31:45.702270  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.702313  585025 main.go:141] libmachine: (no-preload-816185) DBG | skip adding static IP to network mk-no-preload-816185 - found existing host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"}
	I1205 20:31:45.702327  585025 main.go:141] libmachine: (no-preload-816185) Reserved static IP address: 192.168.61.37
	I1205 20:31:45.702343  585025 main.go:141] libmachine: (no-preload-816185) Waiting for SSH to be available...
	I1205 20:31:45.702355  585025 main.go:141] libmachine: (no-preload-816185) DBG | Getting to WaitForSSH function...
	I1205 20:31:45.704606  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.704941  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.704964  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.705115  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH client type: external
	I1205 20:31:45.705146  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa (-rw-------)
	I1205 20:31:45.705181  585025 main.go:141] libmachine: (no-preload-816185) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:45.705212  585025 main.go:141] libmachine: (no-preload-816185) DBG | About to run SSH command:
	I1205 20:31:45.705224  585025 main.go:141] libmachine: (no-preload-816185) DBG | exit 0
	I1205 20:31:45.828472  585025 main.go:141] libmachine: (no-preload-816185) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:45.828882  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetConfigRaw
	I1205 20:31:45.829596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:45.832338  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832643  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.832671  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832970  585025 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/config.json ...
	I1205 20:31:45.833244  585025 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:45.833275  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:45.833498  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.835937  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836344  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.836375  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836555  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.836744  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.836906  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.837046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.837207  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.837441  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.837456  585025 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:45.940890  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:45.940926  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941234  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:31:45.941262  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941453  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.944124  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944537  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.944585  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944677  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.944862  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945026  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945169  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.945343  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.945511  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.945523  585025 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-816185 && echo "no-preload-816185" | sudo tee /etc/hostname
	I1205 20:31:42.220435  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.720366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.219567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.719652  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.220259  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.719556  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.219850  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.720302  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.220377  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.720107  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.229369  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:47.229421  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:46.063755  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-816185
	
	I1205 20:31:46.063794  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.066742  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067177  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.067208  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067371  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.067576  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067756  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067937  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.068147  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.068392  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.068411  585025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-816185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-816185/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-816185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:46.182072  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:46.182110  585025 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:46.182144  585025 buildroot.go:174] setting up certificates
	I1205 20:31:46.182160  585025 provision.go:84] configureAuth start
	I1205 20:31:46.182172  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:46.182490  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:46.185131  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185461  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.185493  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185684  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.188070  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188467  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.188499  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188606  585025 provision.go:143] copyHostCerts
	I1205 20:31:46.188674  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:46.188695  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:46.188753  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:46.188860  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:46.188872  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:46.188892  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:46.188973  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:46.188980  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:46.188998  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:46.189044  585025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.no-preload-816185 san=[127.0.0.1 192.168.61.37 localhost minikube no-preload-816185]
	I1205 20:31:46.460195  585025 provision.go:177] copyRemoteCerts
	I1205 20:31:46.460323  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:46.460394  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.463701  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464171  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.464224  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464422  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.464646  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.464839  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.465024  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.557665  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 20:31:46.583225  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:46.608114  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:46.633059  585025 provision.go:87] duration metric: took 450.879004ms to configureAuth
	I1205 20:31:46.633100  585025 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:46.633319  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:46.633400  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.636634  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637103  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.637138  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637368  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.637624  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.637841  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.638000  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.638189  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.638425  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.638442  585025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:46.877574  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:46.877610  585025 machine.go:96] duration metric: took 1.044347044s to provisionDockerMachine
	I1205 20:31:46.877623  585025 start.go:293] postStartSetup for "no-preload-816185" (driver="kvm2")
	I1205 20:31:46.877634  585025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:46.877668  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:46.878007  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:46.878046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.881022  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881361  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.881422  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881554  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.881741  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.881883  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.882045  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.967997  585025 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:46.972667  585025 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:46.972697  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:46.972770  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:46.972844  585025 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:46.972931  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:46.983157  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:47.009228  585025 start.go:296] duration metric: took 131.588013ms for postStartSetup
	I1205 20:31:47.009272  585025 fix.go:56] duration metric: took 19.33958416s for fixHost
	I1205 20:31:47.009296  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.012039  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012388  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.012416  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012620  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.012858  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013022  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.013318  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:47.013490  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:47.013501  585025 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:47.117166  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430707.083043174
	
	I1205 20:31:47.117195  585025 fix.go:216] guest clock: 1733430707.083043174
	I1205 20:31:47.117203  585025 fix.go:229] Guest: 2024-12-05 20:31:47.083043174 +0000 UTC Remote: 2024-12-05 20:31:47.009275956 +0000 UTC m=+361.003271038 (delta=73.767218ms)
	I1205 20:31:47.117226  585025 fix.go:200] guest clock delta is within tolerance: 73.767218ms
	I1205 20:31:47.117232  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 19.447576666s
	I1205 20:31:47.117259  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.117541  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:47.120283  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120627  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.120653  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120805  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121301  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121492  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121612  585025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:47.121656  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.121727  585025 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:47.121750  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.124146  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124503  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124530  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124723  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124922  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124933  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125086  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125126  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125227  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.125505  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125653  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.221731  585025 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:47.228177  585025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:47.377695  585025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:47.384534  585025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:47.384623  585025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:47.402354  585025 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:47.402388  585025 start.go:495] detecting cgroup driver to use...
	I1205 20:31:47.402454  585025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:47.426593  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:47.443953  585025 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:47.444011  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:47.461107  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:47.477872  585025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:47.617097  585025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:47.780021  585025 docker.go:233] disabling docker service ...
	I1205 20:31:47.780140  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:47.795745  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:47.809573  585025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:47.959910  585025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:48.081465  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:48.096513  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:48.116342  585025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:48.116409  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.128016  585025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:48.128095  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.139511  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.151241  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.162858  585025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:48.174755  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.185958  585025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.203724  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.215682  585025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:48.226478  585025 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:48.226551  585025 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:48.242781  585025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:31:48.254921  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:48.373925  585025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:48.471515  585025 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:48.471625  585025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:48.477640  585025 start.go:563] Will wait 60s for crictl version
	I1205 20:31:48.477707  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.481862  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:48.521367  585025 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:48.521465  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.552343  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.583089  585025 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:31:48.043043  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:50.043172  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:48.584504  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:48.587210  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587539  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:48.587568  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587788  585025 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:48.592190  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:48.606434  585025 kubeadm.go:883] updating cluster {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:48.606605  585025 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:48.606666  585025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:48.642948  585025 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:48.642978  585025 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:48.643061  585025 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.643092  585025 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.643168  585025 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.643075  585025 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.643248  585025 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 20:31:48.643119  585025 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644692  585025 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.644712  585025 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 20:31:48.644694  585025 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.644798  585025 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.644800  585025 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644858  585025 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.811007  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.819346  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.859678  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 20:31:48.864065  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.864191  585025 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 20:31:48.864249  585025 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.864310  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.883959  585025 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 20:31:48.884022  585025 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.884078  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.902180  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.918167  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.946617  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.039706  585025 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 20:31:49.039760  585025 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.039783  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.039808  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039869  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.039887  585025 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 20:31:49.039913  585025 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.039938  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039947  585025 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 20:31:49.039969  585025 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.040001  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.040002  585025 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 20:31:49.040026  585025 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.040069  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.098900  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.098990  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.105551  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.105588  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.105612  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.105646  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.201473  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.218211  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.257277  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.257335  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.257345  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.257479  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.316037  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 20:31:49.316135  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.316159  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.356780  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 20:31:49.356906  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:49.382843  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.405772  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.405863  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.428491  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 20:31:49.428541  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 20:31:49.428563  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428587  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 20:31:49.428611  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428648  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:49.487794  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 20:31:49.487825  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 20:31:49.487893  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 20:31:49.487917  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:31:49.487927  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:49.488022  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:31:49.830311  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:47.219913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.720441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.220220  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.719997  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.219843  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.719591  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.220132  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.719528  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.720234  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.230527  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:52.230575  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:52.543415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:55.042668  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:52.150499  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.721854606s)
	I1205 20:31:52.150547  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 20:31:52.150573  585025 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150588  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.721911838s)
	I1205 20:31:52.150623  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150627  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 20:31:52.150697  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.662646854s)
	I1205 20:31:52.150727  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 20:31:52.150752  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.662648047s)
	I1205 20:31:52.150776  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 20:31:52.150785  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.662799282s)
	I1205 20:31:52.150804  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1205 20:31:52.150834  585025 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.320487562s)
	I1205 20:31:52.150874  585025 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:31:52.150907  585025 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.150943  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:55.858372  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.707687772s)
	I1205 20:31:55.858414  585025 ssh_runner.go:235] Completed: which crictl: (3.707446137s)
	I1205 20:31:55.858498  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:55.858426  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 20:31:55.858580  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.858640  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.901375  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.219602  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.719522  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.220117  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.720426  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.220177  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.720100  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.219569  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.719796  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.219490  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.720420  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.231370  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:57.231415  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.612431  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": read tcp 192.168.50.1:36198->192.168.50.96:8444: read: connection reset by peer
	I1205 20:31:57.727638  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.728368  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
	I1205 20:31:57.042989  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:59.043517  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:57.843623  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.984954959s)
	I1205 20:31:57.843662  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 20:31:57.843683  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843731  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843732  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.942323285s)
	I1205 20:31:57.843821  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:00.030765  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.186998467s)
	I1205 20:32:00.030810  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 20:32:00.030840  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.030846  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.18699947s)
	I1205 20:32:00.030897  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:32:00.030906  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.031026  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:31:57.219497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.720337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.219807  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.720112  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.219949  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.719626  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.219871  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.719466  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.219491  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.719760  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.227807  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:01.044658  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:03.542453  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:05.542887  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:01.486433  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455500806s)
	I1205 20:32:01.486479  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 20:32:01.486512  585025 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:01.486513  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.455460879s)
	I1205 20:32:01.486589  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:32:01.486592  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:03.658906  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.172262326s)
	I1205 20:32:03.658947  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 20:32:03.658979  585025 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:03.659024  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:04.304774  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 20:32:04.304825  585025 cache_images.go:123] Successfully loaded all cached images
	I1205 20:32:04.304832  585025 cache_images.go:92] duration metric: took 15.661840579s to LoadCachedImages
	I1205 20:32:04.304846  585025 kubeadm.go:934] updating node { 192.168.61.37 8443 v1.31.2 crio true true} ...
	I1205 20:32:04.304983  585025 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-816185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:32:04.305057  585025 ssh_runner.go:195] Run: crio config
	I1205 20:32:04.350303  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:04.350332  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:04.350352  585025 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:32:04.350383  585025 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.37 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-816185 NodeName:no-preload-816185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:32:04.350534  585025 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-816185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.37"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.37"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:32:04.350618  585025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:32:04.362733  585025 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:32:04.362815  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:32:04.374219  585025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 20:32:04.392626  585025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:32:04.409943  585025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1205 20:32:04.428180  585025 ssh_runner.go:195] Run: grep 192.168.61.37	control-plane.minikube.internal$ /etc/hosts
	I1205 20:32:04.432433  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
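
	The shell one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping. The same idea as a small Go sketch (a hypothetical helper, not minikube's code; the IP and hostname are taken from the log line above):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites hostsPath so that exactly one line maps
	// hostname to ip, mirroring the grep -v / echo / cp one-liner in the log.
	func ensureHostsEntry(hostsPath, ip, hostname string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// drop any existing line that already maps this hostname
			if strings.HasSuffix(strings.TrimSpace(line), hostname) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.61.37", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}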
	I1205 20:32:04.447274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:04.591755  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:04.609441  585025 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185 for IP: 192.168.61.37
	I1205 20:32:04.609472  585025 certs.go:194] generating shared ca certs ...
	I1205 20:32:04.609494  585025 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:04.609664  585025 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:32:04.609729  585025 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:32:04.609745  585025 certs.go:256] generating profile certs ...
	I1205 20:32:04.609910  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/client.key
	I1205 20:32:04.609991  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key.e9b85612
	I1205 20:32:04.610027  585025 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key
	I1205 20:32:04.610146  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:32:04.610173  585025 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:32:04.610182  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:32:04.610216  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:32:04.610264  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:32:04.610313  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:32:04.610377  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:32:04.611264  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:32:04.642976  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:32:04.679840  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:32:04.707526  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:32:04.746333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:32:04.782671  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:32:04.819333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:32:04.845567  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:32:04.870304  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:32:04.894597  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:32:04.918482  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:32:04.942992  585025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:32:04.960576  585025 ssh_runner.go:195] Run: openssl version
	I1205 20:32:04.966908  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:32:04.978238  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.982959  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.983023  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.989070  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:32:05.000979  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:32:05.012901  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.017583  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.018169  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.025450  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:32:05.037419  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:32:05.050366  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055211  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055255  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.061388  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:32:05.074182  585025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:32:05.079129  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:32:05.085580  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:32:05.091938  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:32:05.099557  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:32:05.105756  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:32:05.112019  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
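
	The -checkend 86400 invocations above assert that each control-plane certificate remains valid for at least another 24 hours (86400 seconds). A rough standard-library equivalent in Go, shown as a sketch with one of the paths checked above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within
	// the given window, roughly what `openssl x509 -checkend` verifies.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}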
	I1205 20:32:05.118426  585025 kubeadm.go:392] StartCluster: {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:32:05.118540  585025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:32:05.118622  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.162731  585025 cri.go:89] found id: ""
	I1205 20:32:05.162821  585025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:32:05.174100  585025 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:32:05.174127  585025 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:32:05.174181  585025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:32:05.184949  585025 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:32:05.186127  585025 kubeconfig.go:125] found "no-preload-816185" server: "https://192.168.61.37:8443"
	I1205 20:32:05.188601  585025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:32:05.198779  585025 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.37
	I1205 20:32:05.198815  585025 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:32:05.198828  585025 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:32:05.198881  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.241175  585025 cri.go:89] found id: ""
	I1205 20:32:05.241247  585025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:32:05.259698  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:32:05.270282  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:32:05.270310  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:32:05.270370  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:32:05.280440  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:32:05.280519  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:32:05.290825  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:32:05.300680  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:32:05.300745  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:32:05.311108  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.320854  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:32:05.320918  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.331099  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:32:05.340948  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:32:05.341017  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:32:05.351280  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:32:05.361567  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:05.477138  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:02.220337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:02.720145  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.219463  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.719913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.219813  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.719940  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.219830  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.720324  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.220287  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.719584  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.228372  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:03.228433  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:08.042416  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:10.043011  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:06.259256  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.483460  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.557633  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
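
	The restart path above re-runs individual kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A compact sketch of driving that same sequence from Go (an assumed wrapper, not minikube's implementation; the binary path, phase names, and config path come from the log lines above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, p...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			// run each phase with the version-pinned kubeadm binary, stopping on the first failure
			out, err := exec.Command("/var/lib/minikube/binaries/v1.31.2/kubeadm", args...).CombinedOutput()
			if err != nil {
				fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
				return
			}
		}
		fmt.Println("control-plane phases re-run")
	}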
	I1205 20:32:06.666782  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:32:06.666885  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.167840  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.667069  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.701559  585025 api_server.go:72] duration metric: took 1.034769472s to wait for apiserver process to appear ...
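
	The 500ms cadence of the pgrep lines above is the process-appearance wait: the loop keeps running `pgrep -xnf kube-apiserver.*minikube.*` until it exits 0. A standalone Go sketch of that polling loop (a simplified stand-in for the ssh_runner-based wait, with the pattern taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep roughly every 500ms until a matching process
	// exists or the timeout elapses.
	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // pgrep exits 0 once a matching process is found
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no process matching %q after %s", pattern, timeout)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
			fmt.Println(err)
		}
	}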
	I1205 20:32:07.701592  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:32:07.701612  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.640462  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.640498  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.640521  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.647093  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.647118  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.702286  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.711497  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:10.711528  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:07.219989  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.720289  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.220381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.719947  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.219838  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.719666  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.219756  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.720312  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.220369  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.720004  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.202247  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.206625  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.206650  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:11.702760  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.718941  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.718974  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:12.202567  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:12.207589  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:32:12.214275  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:12.214304  585025 api_server.go:131] duration metric: took 4.512704501s to wait for apiserver health ...
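
	The 403 responses in the healthz wait above are anonymous requests rejected before the RBAC bootstrap roles are reconciled, and the 500s enumerate post-start hooks that have not yet finished; the wait ends once /healthz returns 200 "ok". A minimal Go sketch of polling such an endpoint, assuming the URL from the log and that the apiserver's serving certificate is not trusted by the caller:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
	// Non-200 responses (403, 500) are treated as "not ready yet" and retried.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// skip verification because the serving cert is signed by the cluster CA only
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.37:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}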
	I1205 20:32:12.214314  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:12.214321  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:12.216193  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:08.229499  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:08.229544  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:12.545378  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:15.043628  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:12.217640  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:12.241907  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:32:12.262114  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:12.275246  585025 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:12.275296  585025 system_pods.go:61] "coredns-7c65d6cfc9-j2hr2" [9ce413ab-c304-40dd-af68-80f15db0e2ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:12.275308  585025 system_pods.go:61] "etcd-no-preload-816185" [ddc20062-02d9-4f9d-a2fb-fa2c7d6aa1cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:12.275319  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [07ff76f2-b05e-4434-b8f9-448bc200507a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:12.275328  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [7c701058-791a-4097-a913-f6989a791067] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:12.275340  585025 system_pods.go:61] "kube-proxy-rjp4j" [340e9ccc-0290-4d3d-829c-44ad65410f3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:12.275348  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [c2f3b04c-9e3a-4060-a6d0-fb9eb2aa5e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:32:12.275359  585025 system_pods.go:61] "metrics-server-6867b74b74-vjwq2" [47ff24fe-0edb-4d06-b280-a0d965b25dae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:12.275367  585025 system_pods.go:61] "storage-provisioner" [bd385e87-56ea-417c-a4a8-b8a6e4f94114] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:12.275376  585025 system_pods.go:74] duration metric: took 13.23725ms to wait for pod list to return data ...
	I1205 20:32:12.275387  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:12.279719  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:12.279746  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:12.279755  585025 node_conditions.go:105] duration metric: took 4.364464ms to run NodePressure ...
	I1205 20:32:12.279774  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:12.562221  585025 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566599  585025 kubeadm.go:739] kubelet initialised
	I1205 20:32:12.566627  585025 kubeadm.go:740] duration metric: took 4.374855ms waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566639  585025 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:12.571780  585025 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:14.579614  585025 pod_ready.go:103] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:12.220304  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:12.720348  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.219553  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.720078  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.219614  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.719625  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.220118  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.720577  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.220392  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.719538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.230519  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:13.230567  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.061543  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.061583  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.061603  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.078424  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.078457  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.227852  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.553664  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.553705  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:16.728155  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.734800  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.734853  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.228013  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.233541  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:17.233577  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.727878  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.736731  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:32:17.746474  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:17.746511  585929 api_server.go:131] duration metric: took 41.019245279s to wait for apiserver health ...
	I1205 20:32:17.746523  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:32:17.746531  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:17.748464  585929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:17.750113  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:17.762750  585929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:32:17.786421  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:17.826859  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:17.826918  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:17.826934  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:17.826946  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:17.826959  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:17.826969  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:17.826980  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:32:17.826989  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:17.827000  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:17.827010  585929 system_pods.go:74] duration metric: took 40.565274ms to wait for pod list to return data ...
	I1205 20:32:17.827025  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:17.838000  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:17.838034  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:17.838050  585929 node_conditions.go:105] duration metric: took 11.010352ms to run NodePressure ...
	I1205 20:32:17.838075  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:18.215713  585929 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222162  585929 kubeadm.go:739] kubelet initialised
	I1205 20:32:18.222187  585929 kubeadm.go:740] duration metric: took 6.444578ms waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222199  585929 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:18.226988  585929 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.235570  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235600  585929 pod_ready.go:82] duration metric: took 8.582972ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.235609  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235617  585929 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.242596  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242623  585929 pod_ready.go:82] duration metric: took 6.99814ms for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.242634  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242642  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.248351  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248373  585929 pod_ready.go:82] duration metric: took 5.725371ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.248383  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248390  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.258151  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258174  585929 pod_ready.go:82] duration metric: took 9.778119ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.258183  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258190  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.619579  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619623  585929 pod_ready.go:82] duration metric: took 361.426091ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.619638  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619649  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.019623  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019655  585929 pod_ready.go:82] duration metric: took 399.997558ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.019669  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019676  585929 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.420201  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420228  585929 pod_ready.go:82] duration metric: took 400.54576ms for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.420242  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420251  585929 pod_ready.go:39] duration metric: took 1.198040831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
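
For reference, the per-pod readiness checks logged above can be reproduced by hand against the same cluster. A minimal sketch, assuming the kubeconfig written by this run and the pod/node names shown in the log (kubectl wait is used here as an illustration; the test itself polls via pod_ready.go rather than this command):

    # List the system-critical pods and their Ready conditions
    kubectl --kubeconfig /home/jenkins/minikube-integration/20052-530897/kubeconfig \
      -n kube-system get pods -o wide

    # Block until a specific pod reports Ready, mirroring pod_ready.go's 4m0s budget
    kubectl --kubeconfig /home/jenkins/minikube-integration/20052-530897/kubeconfig \
      -n kube-system wait --for=condition=Ready pod/coredns-7c65d6cfc9-5drgc --timeout=4m

    # The node itself must report Ready before any of the per-pod checks can pass
    kubectl --kubeconfig /home/jenkins/minikube-integration/20052-530897/kubeconfig \
      get node default-k8s-diff-port-942599
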
	I1205 20:32:19.420292  585929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:32:19.434385  585929 ops.go:34] apiserver oom_adj: -16
	I1205 20:32:19.434420  585929 kubeadm.go:597] duration metric: took 45.406934122s to restartPrimaryControlPlane
	I1205 20:32:19.434434  585929 kubeadm.go:394] duration metric: took 45.464483994s to StartCluster
	I1205 20:32:19.434460  585929 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.434560  585929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:32:19.436299  585929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.436590  585929 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:32:19.436736  585929 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:32:19.436837  585929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436858  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:32:19.436873  585929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.436883  585929 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:32:19.436923  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.436938  585929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436974  585929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-942599"
	I1205 20:32:19.436922  585929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.437024  585929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.437051  585929 addons.go:243] addon metrics-server should already be in state true
	I1205 20:32:19.437090  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.437365  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437407  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437452  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437480  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437509  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437514  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.438584  585929 out.go:177] * Verifying Kubernetes components...
	I1205 20:32:19.440376  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:19.453761  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I1205 20:32:19.453782  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I1205 20:32:19.453767  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I1205 20:32:19.454289  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454441  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454451  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454851  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454871  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.455005  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455021  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455286  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455350  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455409  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455461  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.455910  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455927  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455958  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.455966  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.458587  585929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.458605  585929 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:32:19.458627  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.458955  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.458995  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.472175  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I1205 20:32:19.472667  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.472927  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I1205 20:32:19.473215  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.473233  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.473401  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40929
	I1205 20:32:19.473570  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473608  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.473839  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.474155  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474187  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474290  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474313  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474546  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474638  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474711  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.475267  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.475320  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.476105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.476447  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.478117  585929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:19.478117  585929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:32:17.545165  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.044285  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:17.079986  585025 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:17.080014  585025 pod_ready.go:82] duration metric: took 4.508210865s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:17.080025  585025 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.086070  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.587742  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:20.587775  585025 pod_ready.go:82] duration metric: took 3.507742173s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:20.587789  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.479638  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:32:19.479658  585929 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:32:19.479686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.479719  585929 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.479737  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:32:19.479750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.483208  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483350  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483773  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483790  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483873  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483887  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483936  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484123  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484294  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484324  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484438  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.484456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484571  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.533651  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I1205 20:32:19.534273  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.534802  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.534833  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.535282  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.535535  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.538221  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.538787  585929 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.538804  585929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:32:19.538825  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.541876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542318  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.542354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542556  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.542744  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.542944  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.543129  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.630282  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:19.652591  585929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:19.719058  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.810931  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.812113  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:32:19.812136  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:32:19.875725  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:32:19.875761  585929 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:32:19.946353  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:19.946390  585929 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:32:20.010445  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:20.231055  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231082  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231425  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231454  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231469  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231478  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231476  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.231764  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231784  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231783  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.247021  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.247051  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.247463  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.247490  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.247488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.074948  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.263976727s)
	I1205 20:32:21.075015  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075029  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075397  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075438  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.075449  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075457  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.075766  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075785  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134215  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.123724822s)
	I1205 20:32:21.134271  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134588  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134604  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134612  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134615  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.134620  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134878  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134891  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134904  585929 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-942599"
	I1205 20:32:21.136817  585929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
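
The addon enablement above amounts to copying the manifests onto the node and applying them with the bundled kubectl. A minimal sketch of the same step, run over SSH on the minikube node, using exactly the paths and binary version the log shows:

    # Apply the metrics-server manifests, as in the ssh_runner call logged above
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.2/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml

    # The storage addons are applied the same way
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
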
	I1205 20:32:17.220437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:17.220539  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:17.272666  585602 cri.go:89] found id: ""
	I1205 20:32:17.272702  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.272716  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:17.272723  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:17.272797  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:17.314947  585602 cri.go:89] found id: ""
	I1205 20:32:17.314977  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.314989  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:17.314996  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:17.315061  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:17.354511  585602 cri.go:89] found id: ""
	I1205 20:32:17.354548  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.354561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:17.354571  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:17.354640  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:17.393711  585602 cri.go:89] found id: ""
	I1205 20:32:17.393745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.393759  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:17.393768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:17.393836  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:17.434493  585602 cri.go:89] found id: ""
	I1205 20:32:17.434526  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.434535  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:17.434541  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:17.434602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:17.476201  585602 cri.go:89] found id: ""
	I1205 20:32:17.476235  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.476245  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:17.476253  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:17.476341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:17.516709  585602 cri.go:89] found id: ""
	I1205 20:32:17.516745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.516755  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:17.516762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:17.516818  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:17.557270  585602 cri.go:89] found id: ""
	I1205 20:32:17.557305  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.557314  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:17.557324  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:17.557348  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:17.606494  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:17.606540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:17.681372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:17.681412  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:17.696778  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:17.696816  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:17.839655  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:17.839679  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:17.839717  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.423552  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:20.439794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:20.439875  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:20.482820  585602 cri.go:89] found id: ""
	I1205 20:32:20.482866  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.482880  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:20.482888  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:20.482958  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:20.523590  585602 cri.go:89] found id: ""
	I1205 20:32:20.523629  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.523641  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:20.523649  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:20.523727  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:20.601603  585602 cri.go:89] found id: ""
	I1205 20:32:20.601638  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.601648  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:20.601656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:20.601728  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:20.643927  585602 cri.go:89] found id: ""
	I1205 20:32:20.643959  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.643972  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:20.643981  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:20.644054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:20.690935  585602 cri.go:89] found id: ""
	I1205 20:32:20.690964  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.690975  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:20.690984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:20.691054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:20.728367  585602 cri.go:89] found id: ""
	I1205 20:32:20.728400  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.728412  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:20.728420  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:20.728489  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:20.766529  585602 cri.go:89] found id: ""
	I1205 20:32:20.766562  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.766571  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:20.766578  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:20.766657  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:20.805641  585602 cri.go:89] found id: ""
	I1205 20:32:20.805680  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.805690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:20.805701  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:20.805718  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:20.884460  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:20.884495  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:20.884514  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.998367  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:20.998429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:21.041210  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:21.041247  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:21.103519  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:21.103557  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:21.138175  585929 addons.go:510] duration metric: took 1.701453382s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:32:21.657269  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:22.541880  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:24.543481  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:22.595422  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.594392  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:23.594419  585025 pod_ready.go:82] duration metric: took 3.006622534s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:23.594430  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:25.601616  585025 pod_ready.go:103] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.619187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:23.633782  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:23.633872  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:23.679994  585602 cri.go:89] found id: ""
	I1205 20:32:23.680023  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.680032  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:23.680038  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:23.680094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:23.718362  585602 cri.go:89] found id: ""
	I1205 20:32:23.718425  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.718439  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:23.718447  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:23.718520  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:23.758457  585602 cri.go:89] found id: ""
	I1205 20:32:23.758491  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.758500  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:23.758506  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:23.758558  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:23.794612  585602 cri.go:89] found id: ""
	I1205 20:32:23.794649  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.794662  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:23.794671  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:23.794738  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:23.832309  585602 cri.go:89] found id: ""
	I1205 20:32:23.832341  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.832354  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:23.832361  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:23.832421  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:23.868441  585602 cri.go:89] found id: ""
	I1205 20:32:23.868472  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.868484  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:23.868492  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:23.868573  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:23.902996  585602 cri.go:89] found id: ""
	I1205 20:32:23.903025  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.903036  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:23.903050  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:23.903115  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:23.939830  585602 cri.go:89] found id: ""
	I1205 20:32:23.939865  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.939879  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:23.939892  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:23.939909  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:23.992310  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:23.992354  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:24.007378  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:24.007414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:24.077567  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:24.077594  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:24.077608  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:24.165120  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:24.165163  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:26.711674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:26.726923  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:26.727008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:26.763519  585602 cri.go:89] found id: ""
	I1205 20:32:26.763554  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.763563  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:26.763570  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:26.763628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:26.802600  585602 cri.go:89] found id: ""
	I1205 20:32:26.802635  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.802644  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:26.802650  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:26.802705  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:26.839920  585602 cri.go:89] found id: ""
	I1205 20:32:26.839967  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.839981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:26.839989  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:26.840076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:24.157515  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:26.657197  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:27.656811  585929 node_ready.go:49] node "default-k8s-diff-port-942599" has status "Ready":"True"
	I1205 20:32:27.656842  585929 node_ready.go:38] duration metric: took 8.004215314s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:27.656854  585929 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:27.662792  585929 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668485  585929 pod_ready.go:93] pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.668510  585929 pod_ready.go:82] duration metric: took 5.690516ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668521  585929 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:26.543536  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:28.544214  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:27.101514  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.101540  585025 pod_ready.go:82] duration metric: took 3.507102769s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.101551  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108084  585025 pod_ready.go:93] pod "kube-proxy-rjp4j" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.108116  585025 pod_ready.go:82] duration metric: took 6.557141ms for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108131  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112915  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.112942  585025 pod_ready.go:82] duration metric: took 4.801285ms for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112955  585025 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.119094  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:26.876377  585602 cri.go:89] found id: ""
	I1205 20:32:26.876406  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.876416  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:26.876422  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:26.876491  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:26.913817  585602 cri.go:89] found id: ""
	I1205 20:32:26.913845  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.913854  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:26.913862  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:26.913936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:26.955739  585602 cri.go:89] found id: ""
	I1205 20:32:26.955775  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.955788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:26.955798  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:26.955863  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:26.996191  585602 cri.go:89] found id: ""
	I1205 20:32:26.996223  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.996234  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:26.996242  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:26.996341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:27.040905  585602 cri.go:89] found id: ""
	I1205 20:32:27.040935  585602 logs.go:282] 0 containers: []
	W1205 20:32:27.040947  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:27.040958  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:27.040973  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:27.098103  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:27.098140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:27.116538  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:27.116574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:27.204154  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:27.204187  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:27.204208  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:27.300380  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:27.300431  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:29.840944  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:29.855784  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:29.855869  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:29.893728  585602 cri.go:89] found id: ""
	I1205 20:32:29.893765  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.893777  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:29.893786  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:29.893867  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:29.930138  585602 cri.go:89] found id: ""
	I1205 20:32:29.930176  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.930186  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:29.930193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:29.930248  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:29.966340  585602 cri.go:89] found id: ""
	I1205 20:32:29.966371  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.966380  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:29.966387  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:29.966463  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:30.003868  585602 cri.go:89] found id: ""
	I1205 20:32:30.003900  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.003920  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:30.003928  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:30.004001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:30.044332  585602 cri.go:89] found id: ""
	I1205 20:32:30.044363  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.044373  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:30.044380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:30.044445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:30.088044  585602 cri.go:89] found id: ""
	I1205 20:32:30.088085  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.088098  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:30.088106  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:30.088173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:30.124221  585602 cri.go:89] found id: ""
	I1205 20:32:30.124248  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.124258  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:30.124285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:30.124357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:30.162092  585602 cri.go:89] found id: ""
	I1205 20:32:30.162121  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.162133  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:30.162146  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:30.162162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:30.218526  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:30.218567  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:30.232240  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:30.232292  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:30.308228  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:30.308260  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:30.308296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:30.389348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:30.389391  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
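	The cycle above is minikube's offline diagnostic loop for process 585602: cri.go probes each control-plane component with "sudo crictl ps -a --quiet --name=<component>" and finds no containers, after which logs.go gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal standalone sketch of the probe step, run locally with Go's standard library rather than minikube's ssh_runner (the component list is copied from the log lines above; everything else is illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Components probed in the log above; an empty result for every one of
		// them means no control-plane containers exist on the node.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Equivalent of: sudo crictl ps -a --quiet --name=<name>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%-24s %d containers: %v\n", name, len(ids), ids)
		}
	}

	An empty result for kube-apiserver is consistent with the connection-refused errors that the later "kubectl describe nodes" calls report against localhost:8443.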
	I1205 20:32:29.177093  585929 pod_ready.go:93] pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.177118  585929 pod_ready.go:82] duration metric: took 1.508590352s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.177129  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185839  585929 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.185869  585929 pod_ready.go:82] duration metric: took 8.733028ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185883  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191924  585929 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.191950  585929 pod_ready.go:82] duration metric: took 6.059525ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191963  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256484  585929 pod_ready.go:93] pod "kube-proxy-5vdcq" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.256510  585929 pod_ready.go:82] duration metric: took 64.540117ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256521  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656933  585929 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.656961  585929 pod_ready.go:82] duration metric: took 400.432279ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656972  585929 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:31.664326  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.043630  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.044035  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.542861  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.120200  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.120303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.120532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
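	The interleaved pod_ready lines come from the parallel test processes (585929, 585113, 585025), each polling its metrics-server pod's Ready condition until a 6m0s deadline. A rough equivalent of that wait, shelling out to kubectl instead of using minikube's pod_ready helpers (the pod and namespace names are taken from the log; the jsonpath probe and polling interval are illustrative assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPodReady polls the pod's Ready condition until it reports "True" or
	// the deadline passes, mirroring the repeated pod_ready.go:103 lines above.
	func waitPodReady(ns, pod string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "get", "pod", pod, "-n", ns,
				"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return nil
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod, ns)
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for pod %s/%s to be Ready", ns, pod)
	}

	func main() {
		if err := waitPodReady("kube-system", "metrics-server-6867b74b74-rq8xm", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}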
	I1205 20:32:32.934497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:32.949404  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:32.949488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:33.006117  585602 cri.go:89] found id: ""
	I1205 20:32:33.006148  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.006157  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:33.006163  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:33.006231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:33.064907  585602 cri.go:89] found id: ""
	I1205 20:32:33.064945  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.064958  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:33.064966  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:33.065031  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:33.101268  585602 cri.go:89] found id: ""
	I1205 20:32:33.101295  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.101304  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:33.101310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:33.101378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:33.141705  585602 cri.go:89] found id: ""
	I1205 20:32:33.141733  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.141743  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:33.141750  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:33.141810  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:33.180983  585602 cri.go:89] found id: ""
	I1205 20:32:33.181011  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.181020  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:33.181026  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:33.181086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:33.220742  585602 cri.go:89] found id: ""
	I1205 20:32:33.220779  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.220791  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:33.220799  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:33.220871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:33.255980  585602 cri.go:89] found id: ""
	I1205 20:32:33.256009  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.256017  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:33.256024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:33.256080  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:33.292978  585602 cri.go:89] found id: ""
	I1205 20:32:33.293005  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.293013  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:33.293023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:33.293034  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:33.347167  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:33.347213  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:33.361367  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:33.361408  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:33.435871  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:33.435915  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:33.435932  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:33.518835  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:33.518880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:36.066359  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:36.080867  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:36.080947  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:36.117647  585602 cri.go:89] found id: ""
	I1205 20:32:36.117678  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.117689  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:36.117697  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:36.117763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:36.154376  585602 cri.go:89] found id: ""
	I1205 20:32:36.154412  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.154428  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:36.154436  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:36.154498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:36.193225  585602 cri.go:89] found id: ""
	I1205 20:32:36.193261  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.193274  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:36.193282  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:36.193347  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:36.230717  585602 cri.go:89] found id: ""
	I1205 20:32:36.230748  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.230758  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:36.230764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:36.230817  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:36.270186  585602 cri.go:89] found id: ""
	I1205 20:32:36.270238  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.270252  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:36.270262  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:36.270340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:36.306378  585602 cri.go:89] found id: ""
	I1205 20:32:36.306425  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.306438  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:36.306447  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:36.306531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:36.342256  585602 cri.go:89] found id: ""
	I1205 20:32:36.342289  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.342300  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:36.342306  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:36.342380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:36.380684  585602 cri.go:89] found id: ""
	I1205 20:32:36.380718  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.380732  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:36.380745  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:36.380768  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:36.436066  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:36.436109  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:36.450255  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:36.450285  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:36.521857  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:36.521883  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:36.521897  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:36.608349  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:36.608395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:34.163870  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:36.164890  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:38.042889  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.543140  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:37.619863  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.120462  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:39.157366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:39.171267  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:39.171357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:39.214459  585602 cri.go:89] found id: ""
	I1205 20:32:39.214490  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.214520  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:39.214528  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:39.214583  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:39.250312  585602 cri.go:89] found id: ""
	I1205 20:32:39.250352  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.250366  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:39.250375  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:39.250437  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:39.286891  585602 cri.go:89] found id: ""
	I1205 20:32:39.286932  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.286944  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:39.286952  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:39.287019  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:39.323923  585602 cri.go:89] found id: ""
	I1205 20:32:39.323958  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.323970  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:39.323979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:39.324053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:39.360280  585602 cri.go:89] found id: ""
	I1205 20:32:39.360322  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.360331  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:39.360337  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:39.360403  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:39.397599  585602 cri.go:89] found id: ""
	I1205 20:32:39.397637  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.397650  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:39.397659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:39.397731  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:39.435132  585602 cri.go:89] found id: ""
	I1205 20:32:39.435159  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.435168  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:39.435174  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:39.435241  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:39.470653  585602 cri.go:89] found id: ""
	I1205 20:32:39.470682  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.470690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:39.470700  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:39.470714  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:39.511382  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:39.511413  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:39.563955  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:39.563994  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:39.578015  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:39.578044  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:39.658505  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:39.658535  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:39.658550  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:38.665320  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:41.165054  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.545231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.042231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.620687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.120915  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.248607  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:42.263605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:42.263688  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:42.305480  585602 cri.go:89] found id: ""
	I1205 20:32:42.305508  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.305519  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:42.305527  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:42.305595  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:42.339969  585602 cri.go:89] found id: ""
	I1205 20:32:42.340001  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.340010  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:42.340016  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:42.340090  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:42.381594  585602 cri.go:89] found id: ""
	I1205 20:32:42.381630  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.381643  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:42.381651  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:42.381771  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:42.435039  585602 cri.go:89] found id: ""
	I1205 20:32:42.435072  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.435085  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:42.435093  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:42.435162  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:42.470567  585602 cri.go:89] found id: ""
	I1205 20:32:42.470595  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.470604  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:42.470610  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:42.470674  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:42.510695  585602 cri.go:89] found id: ""
	I1205 20:32:42.510723  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.510731  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:42.510738  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:42.510793  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:42.547687  585602 cri.go:89] found id: ""
	I1205 20:32:42.547711  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.547718  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:42.547735  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:42.547784  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:42.587160  585602 cri.go:89] found id: ""
	I1205 20:32:42.587191  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.587199  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:42.587211  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:42.587225  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:42.669543  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:42.669587  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:42.717795  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:42.717833  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:42.772644  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:42.772696  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:42.788443  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:42.788480  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:42.861560  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.362758  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:45.377178  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:45.377266  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:45.413055  585602 cri.go:89] found id: ""
	I1205 20:32:45.413088  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.413102  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:45.413111  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:45.413176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:45.453769  585602 cri.go:89] found id: ""
	I1205 20:32:45.453799  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.453808  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:45.453813  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:45.453879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:45.499481  585602 cri.go:89] found id: ""
	I1205 20:32:45.499511  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.499522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:45.499531  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:45.499598  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:45.537603  585602 cri.go:89] found id: ""
	I1205 20:32:45.537638  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.537647  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:45.537653  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:45.537707  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:45.572430  585602 cri.go:89] found id: ""
	I1205 20:32:45.572463  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.572471  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:45.572479  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:45.572556  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:45.610349  585602 cri.go:89] found id: ""
	I1205 20:32:45.610387  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.610398  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:45.610406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:45.610476  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:45.649983  585602 cri.go:89] found id: ""
	I1205 20:32:45.650018  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.650031  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:45.650038  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:45.650113  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:45.689068  585602 cri.go:89] found id: ""
	I1205 20:32:45.689099  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.689107  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:45.689118  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:45.689131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:45.743715  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:45.743758  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:45.759803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:45.759834  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:45.835107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.835133  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:45.835146  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:45.914590  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:45.914632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:43.665616  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:46.164064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.045269  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.544519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.619099  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.627948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:48.456633  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:48.475011  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:48.475086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:48.512878  585602 cri.go:89] found id: ""
	I1205 20:32:48.512913  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.512925  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:48.512933  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:48.513002  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:48.551708  585602 cri.go:89] found id: ""
	I1205 20:32:48.551737  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.551744  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:48.551751  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:48.551805  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:48.590765  585602 cri.go:89] found id: ""
	I1205 20:32:48.590791  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.590800  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:48.590806  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:48.590859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:48.629447  585602 cri.go:89] found id: ""
	I1205 20:32:48.629473  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.629481  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:48.629487  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:48.629540  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:48.667299  585602 cri.go:89] found id: ""
	I1205 20:32:48.667329  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.667339  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:48.667347  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:48.667414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:48.703771  585602 cri.go:89] found id: ""
	I1205 20:32:48.703816  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.703830  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:48.703841  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:48.703911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:48.747064  585602 cri.go:89] found id: ""
	I1205 20:32:48.747098  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.747111  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:48.747118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:48.747186  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.786608  585602 cri.go:89] found id: ""
	I1205 20:32:48.786649  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.786663  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:48.786684  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:48.786700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:48.860834  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:48.860866  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:48.860881  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:48.944029  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:48.944082  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:48.982249  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:48.982284  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:49.036460  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:49.036509  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
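	Each iteration also re-collects the host-level logs (kubelet and CRI-O via journalctl, kernel messages via dmesg) before retrying the apiserver probe. A small sketch of that collection step using only the standard library, with the commands copied from the log lines above (the grouping and output handling are illustrative, not minikube's logs.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same host-level sources the "Gathering logs for ..." steps read,
		// with the tail length matching the "-n 400" seen in the log.
		sources := map[string][]string{
			"kubelet": {"journalctl", "-u", "kubelet", "-n", "400"},
			"crio":    {"journalctl", "-u", "crio", "-n", "400"},
			"dmesg":   {"sh", "-c", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		}
		for name, args := range sources {
			out, err := exec.Command("sudo", args...).CombinedOutput()
			if err != nil {
				fmt.Printf("== %s: error: %v\n", name, err)
				continue
			}
			fmt.Printf("== %s (%d bytes collected)\n", name, len(out))
		}
	}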
	I1205 20:32:51.556456  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:51.571498  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:51.571590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:51.616890  585602 cri.go:89] found id: ""
	I1205 20:32:51.616924  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.616934  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:51.616942  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:51.617008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:51.660397  585602 cri.go:89] found id: ""
	I1205 20:32:51.660433  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.660445  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:51.660453  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:51.660543  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:51.698943  585602 cri.go:89] found id: ""
	I1205 20:32:51.698973  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.698981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:51.698988  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:51.699041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:51.737254  585602 cri.go:89] found id: ""
	I1205 20:32:51.737288  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.737297  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:51.737310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:51.737366  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:51.775560  585602 cri.go:89] found id: ""
	I1205 20:32:51.775592  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.775600  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:51.775606  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:51.775681  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:51.814314  585602 cri.go:89] found id: ""
	I1205 20:32:51.814370  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.814383  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:51.814393  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:51.814464  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:51.849873  585602 cri.go:89] found id: ""
	I1205 20:32:51.849913  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.849935  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:51.849944  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:51.850018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.164562  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:50.664498  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.044224  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.542721  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.118857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.120231  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:51.891360  585602 cri.go:89] found id: ""
	I1205 20:32:51.891388  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.891400  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:51.891412  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:51.891429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:51.943812  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:51.943854  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.959119  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:51.959152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:52.036014  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:52.036040  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:52.036059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:52.114080  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:52.114122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:54.657243  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:54.672319  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:54.672407  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:54.708446  585602 cri.go:89] found id: ""
	I1205 20:32:54.708475  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.708484  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:54.708491  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:54.708569  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:54.747309  585602 cri.go:89] found id: ""
	I1205 20:32:54.747347  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.747359  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:54.747370  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:54.747451  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:54.790742  585602 cri.go:89] found id: ""
	I1205 20:32:54.790772  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.790781  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:54.790787  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:54.790853  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:54.828857  585602 cri.go:89] found id: ""
	I1205 20:32:54.828885  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.828894  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:54.828902  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:54.828964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:54.867691  585602 cri.go:89] found id: ""
	I1205 20:32:54.867729  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.867740  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:54.867747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:54.867819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:54.907216  585602 cri.go:89] found id: ""
	I1205 20:32:54.907242  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.907249  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:54.907256  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:54.907308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:54.945800  585602 cri.go:89] found id: ""
	I1205 20:32:54.945827  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.945837  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:54.945844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:54.945895  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:54.993176  585602 cri.go:89] found id: ""
	I1205 20:32:54.993216  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.993228  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:54.993242  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:54.993258  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:55.045797  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:55.045835  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:55.060103  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:55.060136  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:55.129440  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:55.129467  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:55.129485  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:55.214949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:55.214999  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:53.164619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:55.663605  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.543148  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.543374  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.543687  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.620220  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.620759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.626643  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:57.755086  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:57.769533  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:57.769622  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:57.807812  585602 cri.go:89] found id: ""
	I1205 20:32:57.807847  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.807858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:57.807869  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:57.807941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:57.846179  585602 cri.go:89] found id: ""
	I1205 20:32:57.846209  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.846223  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:57.846232  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:57.846305  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:57.881438  585602 cri.go:89] found id: ""
	I1205 20:32:57.881473  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.881482  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:57.881496  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:57.881553  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:57.918242  585602 cri.go:89] found id: ""
	I1205 20:32:57.918283  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.918294  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:57.918302  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:57.918378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:57.962825  585602 cri.go:89] found id: ""
	I1205 20:32:57.962863  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.962873  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:57.962879  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:57.962955  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:58.004655  585602 cri.go:89] found id: ""
	I1205 20:32:58.004699  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.004711  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:58.004731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:58.004802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:58.043701  585602 cri.go:89] found id: ""
	I1205 20:32:58.043730  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.043738  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:58.043744  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:58.043802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:58.081400  585602 cri.go:89] found id: ""
	I1205 20:32:58.081437  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.081450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:58.081463  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:58.081486  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:58.135531  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:58.135573  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:58.149962  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:58.149998  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:58.227810  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:58.227834  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:58.227849  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:58.308173  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:58.308219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:00.848019  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:00.863423  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:00.863496  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:00.902526  585602 cri.go:89] found id: ""
	I1205 20:33:00.902553  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.902561  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:00.902567  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:00.902621  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:00.939891  585602 cri.go:89] found id: ""
	I1205 20:33:00.939932  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.939942  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:00.939948  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:00.940022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:00.981645  585602 cri.go:89] found id: ""
	I1205 20:33:00.981676  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.981684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:00.981691  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:00.981745  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:01.027753  585602 cri.go:89] found id: ""
	I1205 20:33:01.027780  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.027789  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:01.027795  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:01.027877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:01.064529  585602 cri.go:89] found id: ""
	I1205 20:33:01.064559  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.064567  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:01.064574  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:01.064628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:01.102239  585602 cri.go:89] found id: ""
	I1205 20:33:01.102272  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.102281  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:01.102287  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:01.102357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:01.139723  585602 cri.go:89] found id: ""
	I1205 20:33:01.139760  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.139770  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:01.139778  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:01.139845  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:01.176172  585602 cri.go:89] found id: ""
	I1205 20:33:01.176198  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.176207  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:01.176216  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:01.176231  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:01.230085  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:01.230133  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:01.245574  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:01.245617  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:01.340483  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:01.340520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:01.340537  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:01.416925  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:01.416972  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:58.164852  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.664376  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:02.677134  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.042415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.543101  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.119783  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.120647  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.958855  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:03.974024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:03.974096  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:04.021407  585602 cri.go:89] found id: ""
	I1205 20:33:04.021442  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.021451  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:04.021458  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:04.021523  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:04.063385  585602 cri.go:89] found id: ""
	I1205 20:33:04.063414  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.063423  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:04.063430  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:04.063488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:04.103693  585602 cri.go:89] found id: ""
	I1205 20:33:04.103735  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.103747  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:04.103756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:04.103815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:04.143041  585602 cri.go:89] found id: ""
	I1205 20:33:04.143072  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.143100  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:04.143109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:04.143179  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:04.180668  585602 cri.go:89] found id: ""
	I1205 20:33:04.180702  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.180712  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:04.180718  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:04.180778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:04.221848  585602 cri.go:89] found id: ""
	I1205 20:33:04.221885  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.221894  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:04.221901  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:04.222018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:04.263976  585602 cri.go:89] found id: ""
	I1205 20:33:04.264014  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.264024  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:04.264030  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:04.264097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:04.298698  585602 cri.go:89] found id: ""
	I1205 20:33:04.298726  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.298737  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:04.298751  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:04.298767  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:04.347604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:04.347659  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:04.361325  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:04.361361  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:04.437679  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:04.437704  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:04.437720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:04.520043  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:04.520103  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:05.163317  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.165936  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:08.043365  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:10.544442  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.122134  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:09.620228  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.070687  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:07.085290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:07.085367  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:07.126233  585602 cri.go:89] found id: ""
	I1205 20:33:07.126265  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.126276  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:07.126285  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:07.126346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:07.163004  585602 cri.go:89] found id: ""
	I1205 20:33:07.163040  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.163053  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:07.163061  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:07.163126  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:07.201372  585602 cri.go:89] found id: ""
	I1205 20:33:07.201412  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.201425  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:07.201435  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:07.201509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:07.237762  585602 cri.go:89] found id: ""
	I1205 20:33:07.237795  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.237807  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:07.237815  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:07.237885  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:07.273940  585602 cri.go:89] found id: ""
	I1205 20:33:07.273976  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.273985  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:07.273995  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:07.274057  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:07.311028  585602 cri.go:89] found id: ""
	I1205 20:33:07.311061  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.311070  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:07.311076  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:07.311131  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:07.347386  585602 cri.go:89] found id: ""
	I1205 20:33:07.347422  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.347433  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:07.347441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:07.347503  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:07.386412  585602 cri.go:89] found id: ""
	I1205 20:33:07.386446  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.386458  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:07.386471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:07.386489  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:07.430250  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:07.430280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:07.483936  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:07.483982  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:07.498201  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:07.498236  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:07.576741  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:07.576767  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:07.576780  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.164792  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:10.178516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:10.178596  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:10.215658  585602 cri.go:89] found id: ""
	I1205 20:33:10.215692  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.215702  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:10.215711  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:10.215779  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:10.251632  585602 cri.go:89] found id: ""
	I1205 20:33:10.251671  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.251683  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:10.251691  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:10.251763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:10.295403  585602 cri.go:89] found id: ""
	I1205 20:33:10.295435  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.295453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:10.295460  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:10.295513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:10.329747  585602 cri.go:89] found id: ""
	I1205 20:33:10.329778  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.329787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:10.329793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:10.329871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:10.369975  585602 cri.go:89] found id: ""
	I1205 20:33:10.370016  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.370028  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:10.370036  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:10.370104  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:10.408146  585602 cri.go:89] found id: ""
	I1205 20:33:10.408183  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.408196  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:10.408204  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:10.408288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:10.443803  585602 cri.go:89] found id: ""
	I1205 20:33:10.443839  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.443850  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:10.443858  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:10.443932  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:10.481784  585602 cri.go:89] found id: ""
	I1205 20:33:10.481826  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.481840  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:10.481854  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:10.481872  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:10.531449  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:10.531498  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:10.549258  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:10.549288  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:10.620162  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:10.620189  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:10.620206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.704656  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:10.704706  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:09.663940  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.163534  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:13.043720  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:15.542736  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.118781  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:14.619996  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:13.251518  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:13.264731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:13.264815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:13.297816  585602 cri.go:89] found id: ""
	I1205 20:33:13.297846  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.297855  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:13.297861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:13.297918  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:13.330696  585602 cri.go:89] found id: ""
	I1205 20:33:13.330724  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.330732  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:13.330738  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:13.330789  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:13.366257  585602 cri.go:89] found id: ""
	I1205 20:33:13.366304  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.366315  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:13.366321  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:13.366385  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:13.403994  585602 cri.go:89] found id: ""
	I1205 20:33:13.404030  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.404042  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:13.404051  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:13.404121  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:13.450160  585602 cri.go:89] found id: ""
	I1205 20:33:13.450189  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.450198  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:13.450205  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:13.450262  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:13.502593  585602 cri.go:89] found id: ""
	I1205 20:33:13.502629  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.502640  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:13.502650  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:13.502720  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:13.548051  585602 cri.go:89] found id: ""
	I1205 20:33:13.548084  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.548095  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:13.548103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:13.548166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:13.593913  585602 cri.go:89] found id: ""
	I1205 20:33:13.593947  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.593960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:13.593975  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:13.593997  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:13.674597  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:13.674628  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:13.674647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:13.760747  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:13.760796  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:13.804351  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:13.804383  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:13.856896  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:13.856958  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
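Each "0 containers" block above follows the same pattern: for every expected control-plane component, the log collector lists matching CRI containers and finds none. A rough one-off shell equivalent of that check (a sketch, not minikube's own code; it only assumes crictl is available on the node):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "$c: $(sudo crictl ps -a --quiet --name="$c" | wc -l) containers"
    done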
	I1205 20:33:16.372754  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:16.387165  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:16.387242  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:16.426612  585602 cri.go:89] found id: ""
	I1205 20:33:16.426655  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.426668  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:16.426676  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:16.426734  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:16.461936  585602 cri.go:89] found id: ""
	I1205 20:33:16.461974  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.461988  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:16.461997  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:16.462060  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:16.498010  585602 cri.go:89] found id: ""
	I1205 20:33:16.498044  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.498062  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:16.498069  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:16.498133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:16.533825  585602 cri.go:89] found id: ""
	I1205 20:33:16.533854  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.533863  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:16.533869  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:16.533941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:16.570834  585602 cri.go:89] found id: ""
	I1205 20:33:16.570875  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.570887  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:16.570896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:16.570968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:16.605988  585602 cri.go:89] found id: ""
	I1205 20:33:16.606026  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.606038  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:16.606047  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:16.606140  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:16.645148  585602 cri.go:89] found id: ""
	I1205 20:33:16.645178  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.645188  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:16.645195  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:16.645261  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:16.682449  585602 cri.go:89] found id: ""
	I1205 20:33:16.682479  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.682491  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:16.682502  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:16.682519  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.696944  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:16.696980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:16.777034  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:16.777064  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:16.777078  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:14.164550  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.664527  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:17.543278  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:19.543404  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.621517  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:18.626303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.854812  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:16.854880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:16.905101  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:16.905131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.463427  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:19.477135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:19.477233  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:19.529213  585602 cri.go:89] found id: ""
	I1205 20:33:19.529248  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.529264  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:19.529274  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:19.529359  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:19.575419  585602 cri.go:89] found id: ""
	I1205 20:33:19.575453  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.575465  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:19.575474  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:19.575546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:19.616657  585602 cri.go:89] found id: ""
	I1205 20:33:19.616691  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.616704  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:19.616713  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:19.616787  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:19.653142  585602 cri.go:89] found id: ""
	I1205 20:33:19.653177  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.653189  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:19.653198  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:19.653267  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:19.690504  585602 cri.go:89] found id: ""
	I1205 20:33:19.690544  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.690555  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:19.690563  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:19.690635  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:19.730202  585602 cri.go:89] found id: ""
	I1205 20:33:19.730229  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.730237  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:19.730245  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:19.730302  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:19.767212  585602 cri.go:89] found id: ""
	I1205 20:33:19.767243  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.767255  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:19.767264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:19.767336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:19.803089  585602 cri.go:89] found id: ""
	I1205 20:33:19.803125  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.803137  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:19.803163  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:19.803180  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:19.884542  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:19.884589  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:19.925257  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:19.925303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.980457  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:19.980510  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:19.997026  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:19.997057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:20.075062  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:18.664915  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.163064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:22.042272  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:24.043822  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.120054  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:23.120944  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.618857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
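The interleaved pod_ready lines come from parallel test processes polling their metrics-server pods until the Ready condition turns True. The same condition can be read directly with kubectl; a sketch using one of the pod names from this log (the kube-system namespace and the Ready condition path are standard, the context name is a placeholder):

    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-vjwq2 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False until the pod becomes Ready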
	I1205 20:33:22.575469  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:22.588686  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:22.588768  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:22.622824  585602 cri.go:89] found id: ""
	I1205 20:33:22.622860  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.622868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:22.622874  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:22.622931  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:22.659964  585602 cri.go:89] found id: ""
	I1205 20:33:22.660059  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.660074  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:22.660085  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:22.660153  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:22.695289  585602 cri.go:89] found id: ""
	I1205 20:33:22.695325  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.695337  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:22.695345  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:22.695417  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:22.734766  585602 cri.go:89] found id: ""
	I1205 20:33:22.734801  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.734813  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:22.734821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:22.734896  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:22.773778  585602 cri.go:89] found id: ""
	I1205 20:33:22.773806  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.773818  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:22.773826  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:22.773899  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:22.811468  585602 cri.go:89] found id: ""
	I1205 20:33:22.811503  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.811514  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:22.811521  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:22.811591  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:22.852153  585602 cri.go:89] found id: ""
	I1205 20:33:22.852210  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.852221  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:22.852227  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:22.852318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:22.888091  585602 cri.go:89] found id: ""
	I1205 20:33:22.888120  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.888129  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:22.888139  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:22.888155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:22.943210  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:22.943252  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:22.958356  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:22.958393  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:23.026732  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:23.026770  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:23.026788  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:23.106356  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:23.106395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:25.650832  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:25.665392  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:25.665475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:25.701109  585602 cri.go:89] found id: ""
	I1205 20:33:25.701146  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.701155  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:25.701162  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:25.701231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:25.738075  585602 cri.go:89] found id: ""
	I1205 20:33:25.738108  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.738117  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:25.738123  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:25.738176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:25.775031  585602 cri.go:89] found id: ""
	I1205 20:33:25.775078  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.775090  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:25.775100  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:25.775173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:25.811343  585602 cri.go:89] found id: ""
	I1205 20:33:25.811376  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.811386  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:25.811395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:25.811471  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:25.846635  585602 cri.go:89] found id: ""
	I1205 20:33:25.846674  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.846684  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:25.846692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:25.846766  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:25.881103  585602 cri.go:89] found id: ""
	I1205 20:33:25.881136  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.881145  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:25.881151  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:25.881224  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:25.917809  585602 cri.go:89] found id: ""
	I1205 20:33:25.917844  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.917855  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:25.917864  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:25.917936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:25.955219  585602 cri.go:89] found id: ""
	I1205 20:33:25.955245  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.955254  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:25.955264  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:25.955276  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:26.007016  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:26.007059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:26.021554  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:26.021601  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:26.099290  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:26.099321  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:26.099334  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:26.182955  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:26.182993  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:23.164876  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.665151  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:26.542519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:28.542856  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.542941  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:27.621687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.119140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:28.725201  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:28.739515  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:28.739602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.778187  585602 cri.go:89] found id: ""
	I1205 20:33:28.778230  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.778242  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:28.778249  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:28.778315  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:28.815788  585602 cri.go:89] found id: ""
	I1205 20:33:28.815826  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.815838  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:28.815845  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:28.815912  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:28.852222  585602 cri.go:89] found id: ""
	I1205 20:33:28.852251  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.852261  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:28.852289  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:28.852362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:28.889742  585602 cri.go:89] found id: ""
	I1205 20:33:28.889776  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.889787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:28.889794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:28.889859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:28.926872  585602 cri.go:89] found id: ""
	I1205 20:33:28.926903  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.926912  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:28.926919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:28.926972  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:28.963380  585602 cri.go:89] found id: ""
	I1205 20:33:28.963418  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.963432  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:28.963441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:28.963509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:29.000711  585602 cri.go:89] found id: ""
	I1205 20:33:29.000746  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.000764  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:29.000772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:29.000848  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:29.035934  585602 cri.go:89] found id: ""
	I1205 20:33:29.035963  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.035974  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:29.035987  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:29.036003  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:29.091336  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:29.091382  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:29.105784  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:29.105814  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:29.182038  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:29.182078  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:29.182095  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:29.261107  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:29.261153  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:31.802911  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:31.817285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:31.817369  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.164470  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.664154  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:33.043654  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.044730  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:32.120759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:34.619618  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
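	[editor's note] Interleaved with that retry loop, three other StartStop test processes (585929, 585113, 585025) keep polling their metrics-server pods, which never report Ready. The sketch below shows the same readiness check done by shelling out to kubectl with a jsonpath filter; the namespace and pod name are copied from the log lines above, and using kubectl rather than client-go is an assumption for illustration, not the harness's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the pod's Ready condition, mirroring the pod_ready checks above.
	out, err := exec.Command("kubectl", "--namespace", "kube-system", "get", "pod",
		"metrics-server-6867b74b74-rq8xm",
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println(`has status "Ready":`, strings.TrimSpace(string(out))) // "True" or "False"
}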
	I1205 20:33:31.854865  585602 cri.go:89] found id: ""
	I1205 20:33:31.854900  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.854914  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:31.854922  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:31.854995  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:31.893928  585602 cri.go:89] found id: ""
	I1205 20:33:31.893964  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.893977  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:31.893984  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:31.894053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:31.929490  585602 cri.go:89] found id: ""
	I1205 20:33:31.929527  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.929540  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:31.929548  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:31.929637  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:31.964185  585602 cri.go:89] found id: ""
	I1205 20:33:31.964211  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.964219  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:31.964225  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:31.964291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:32.002708  585602 cri.go:89] found id: ""
	I1205 20:33:32.002748  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.002760  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:32.002768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:32.002847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:32.040619  585602 cri.go:89] found id: ""
	I1205 20:33:32.040712  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.040740  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:32.040758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:32.040839  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:32.079352  585602 cri.go:89] found id: ""
	I1205 20:33:32.079390  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.079404  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:32.079412  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:32.079484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:32.117560  585602 cri.go:89] found id: ""
	I1205 20:33:32.117596  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.117608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:32.117629  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:32.117653  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:32.172639  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:32.172686  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:32.187687  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:32.187727  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:32.265000  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:32.265034  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:32.265051  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:32.348128  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:32.348176  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:34.890144  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:34.903953  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:34.904032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:34.939343  585602 cri.go:89] found id: ""
	I1205 20:33:34.939374  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.939383  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:34.939389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:34.939444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:34.978225  585602 cri.go:89] found id: ""
	I1205 20:33:34.978266  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.978278  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:34.978286  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:34.978363  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:35.015918  585602 cri.go:89] found id: ""
	I1205 20:33:35.015950  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.015960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:35.015966  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:35.016032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:35.053222  585602 cri.go:89] found id: ""
	I1205 20:33:35.053249  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.053257  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:35.053264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:35.053320  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:35.088369  585602 cri.go:89] found id: ""
	I1205 20:33:35.088401  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.088412  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:35.088421  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:35.088498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:35.135290  585602 cri.go:89] found id: ""
	I1205 20:33:35.135327  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.135338  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:35.135346  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:35.135412  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:35.174959  585602 cri.go:89] found id: ""
	I1205 20:33:35.174996  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.175008  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:35.175017  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:35.175097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:35.215101  585602 cri.go:89] found id: ""
	I1205 20:33:35.215134  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.215143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:35.215152  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:35.215167  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:35.269372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:35.269414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:35.285745  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:35.285776  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:35.364774  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:35.364807  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:35.364824  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:35.445932  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:35.445980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
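	[editor's note] Every "describe nodes" attempt in these cycles fails the same way: the bundled kubectl cannot reach the apiserver at localhost:8443 because nothing is listening there. A quick way to confirm that from the node is a plain TCP dial, as in the sketch below; the host and port are taken from the error text in the log, and the probe is an illustration only, not part of minikube.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The address every failed "describe nodes" call above is trying to reach.
	addr := "localhost:8443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("connection refused or timed out for %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("something is listening on %s\n", addr)
}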
	I1205 20:33:33.163790  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.163966  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.164819  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.047128  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.543051  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:36.620450  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.120055  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.996837  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:38.010545  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:38.010612  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:38.048292  585602 cri.go:89] found id: ""
	I1205 20:33:38.048334  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.048350  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:38.048360  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:38.048429  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:38.086877  585602 cri.go:89] found id: ""
	I1205 20:33:38.086911  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.086921  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:38.086927  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:38.087001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:38.122968  585602 cri.go:89] found id: ""
	I1205 20:33:38.122999  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.123010  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:38.123018  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:38.123082  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:38.164901  585602 cri.go:89] found id: ""
	I1205 20:33:38.164940  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.164949  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:38.164955  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:38.165006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:38.200697  585602 cri.go:89] found id: ""
	I1205 20:33:38.200725  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.200734  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:38.200740  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:38.200803  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:38.240306  585602 cri.go:89] found id: ""
	I1205 20:33:38.240338  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.240347  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:38.240354  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:38.240424  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:38.275788  585602 cri.go:89] found id: ""
	I1205 20:33:38.275823  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.275835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:38.275844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:38.275917  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:38.311431  585602 cri.go:89] found id: ""
	I1205 20:33:38.311468  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.311480  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:38.311493  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:38.311507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:38.361472  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:38.361515  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:38.375970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:38.376004  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:38.450913  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:38.450941  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:38.450961  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:38.527620  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:38.527666  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:41.072438  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:41.086085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:41.086168  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:41.123822  585602 cri.go:89] found id: ""
	I1205 20:33:41.123852  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.123861  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:41.123868  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:41.123919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:41.160343  585602 cri.go:89] found id: ""
	I1205 20:33:41.160371  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.160380  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:41.160389  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:41.160457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:41.198212  585602 cri.go:89] found id: ""
	I1205 20:33:41.198240  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.198249  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:41.198255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:41.198309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:41.233793  585602 cri.go:89] found id: ""
	I1205 20:33:41.233824  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.233832  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:41.233838  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:41.233890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:41.269397  585602 cri.go:89] found id: ""
	I1205 20:33:41.269435  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.269447  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:41.269457  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:41.269529  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:41.303079  585602 cri.go:89] found id: ""
	I1205 20:33:41.303116  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.303128  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:41.303136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:41.303196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:41.337784  585602 cri.go:89] found id: ""
	I1205 20:33:41.337817  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.337826  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:41.337832  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:41.337901  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:41.371410  585602 cri.go:89] found id: ""
	I1205 20:33:41.371438  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.371446  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:41.371456  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:41.371467  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:41.422768  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:41.422807  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:41.437427  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:41.437461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:41.510875  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:41.510898  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:41.510915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:41.590783  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:41.590826  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:39.667344  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.172287  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.043022  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.543222  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:41.120670  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:43.622132  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:45.623483  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.136390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:44.149935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:44.150006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:44.187807  585602 cri.go:89] found id: ""
	I1205 20:33:44.187846  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.187858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:44.187866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:44.187933  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:44.224937  585602 cri.go:89] found id: ""
	I1205 20:33:44.224965  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.224973  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:44.224978  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:44.225040  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:44.260230  585602 cri.go:89] found id: ""
	I1205 20:33:44.260274  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.260287  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:44.260297  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:44.260439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:44.296410  585602 cri.go:89] found id: ""
	I1205 20:33:44.296439  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.296449  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:44.296455  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:44.296507  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:44.332574  585602 cri.go:89] found id: ""
	I1205 20:33:44.332623  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.332635  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:44.332642  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:44.332709  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:44.368925  585602 cri.go:89] found id: ""
	I1205 20:33:44.368973  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.368985  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:44.368994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:44.369068  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:44.410041  585602 cri.go:89] found id: ""
	I1205 20:33:44.410075  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.410088  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:44.410095  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:44.410165  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:44.454254  585602 cri.go:89] found id: ""
	I1205 20:33:44.454295  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.454316  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:44.454330  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:44.454346  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:44.507604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:44.507669  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:44.525172  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:44.525219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:44.599417  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:44.599446  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:44.599465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:44.681624  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:44.681685  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:44.664942  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.163452  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.043225  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:49.044675  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:48.120302  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:50.120568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.230092  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:47.243979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:47.244076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:47.280346  585602 cri.go:89] found id: ""
	I1205 20:33:47.280376  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.280385  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:47.280392  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:47.280448  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:47.316454  585602 cri.go:89] found id: ""
	I1205 20:33:47.316479  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.316487  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:47.316493  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:47.316546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:47.353339  585602 cri.go:89] found id: ""
	I1205 20:33:47.353374  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.353386  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:47.353395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:47.353466  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:47.388256  585602 cri.go:89] found id: ""
	I1205 20:33:47.388319  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.388330  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:47.388339  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:47.388408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:47.424907  585602 cri.go:89] found id: ""
	I1205 20:33:47.424942  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.424953  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:47.424961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:47.425035  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:47.461386  585602 cri.go:89] found id: ""
	I1205 20:33:47.461416  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.461425  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:47.461431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:47.461485  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:47.501092  585602 cri.go:89] found id: ""
	I1205 20:33:47.501121  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.501130  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:47.501136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:47.501189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:47.559478  585602 cri.go:89] found id: ""
	I1205 20:33:47.559507  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.559520  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:47.559533  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:47.559551  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:47.609761  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:47.609800  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:47.626579  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:47.626606  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:47.713490  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:47.713520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:47.713540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:47.795346  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:47.795398  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.339441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:50.353134  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:50.353216  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:50.393950  585602 cri.go:89] found id: ""
	I1205 20:33:50.393979  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.393990  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:50.394007  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:50.394074  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:50.431166  585602 cri.go:89] found id: ""
	I1205 20:33:50.431201  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.431212  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:50.431221  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:50.431291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:50.472641  585602 cri.go:89] found id: ""
	I1205 20:33:50.472674  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.472684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:50.472692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:50.472763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:50.512111  585602 cri.go:89] found id: ""
	I1205 20:33:50.512152  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.512165  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:50.512173  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:50.512247  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:50.554500  585602 cri.go:89] found id: ""
	I1205 20:33:50.554536  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.554549  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:50.554558  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:50.554625  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:50.590724  585602 cri.go:89] found id: ""
	I1205 20:33:50.590755  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.590764  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:50.590771  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:50.590837  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:50.628640  585602 cri.go:89] found id: ""
	I1205 20:33:50.628666  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.628675  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:50.628681  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:50.628732  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:50.670009  585602 cri.go:89] found id: ""
	I1205 20:33:50.670039  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.670047  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:50.670063  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:50.670075  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:50.684236  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:50.684290  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:50.757761  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:50.757790  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:50.757813  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:50.839665  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:50.839720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.881087  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:50.881122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:49.164986  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.665655  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.543286  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.543689  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:52.621297  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:54.621764  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.433345  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:53.446747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:53.446819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:53.482928  585602 cri.go:89] found id: ""
	I1205 20:33:53.482967  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.482979  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:53.482988  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:53.483048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:53.519096  585602 cri.go:89] found id: ""
	I1205 20:33:53.519128  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.519136  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:53.519142  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:53.519196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:53.556207  585602 cri.go:89] found id: ""
	I1205 20:33:53.556233  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.556243  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:53.556249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:53.556346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:53.589708  585602 cri.go:89] found id: ""
	I1205 20:33:53.589736  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.589745  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:53.589758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:53.589813  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:53.630344  585602 cri.go:89] found id: ""
	I1205 20:33:53.630371  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.630380  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:53.630386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:53.630438  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:53.668895  585602 cri.go:89] found id: ""
	I1205 20:33:53.668921  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.668929  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:53.668935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:53.668987  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:53.706601  585602 cri.go:89] found id: ""
	I1205 20:33:53.706628  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.706638  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:53.706644  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:53.706704  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:53.744922  585602 cri.go:89] found id: ""
	I1205 20:33:53.744952  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.744960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:53.744970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:53.744989  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:53.823816  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:53.823853  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:53.823928  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:53.905075  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:53.905118  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:53.955424  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:53.955468  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:54.014871  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:54.014916  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.537142  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:56.550409  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:56.550478  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:56.587148  585602 cri.go:89] found id: ""
	I1205 20:33:56.587174  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.587184  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:56.587190  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:56.587249  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:56.625153  585602 cri.go:89] found id: ""
	I1205 20:33:56.625180  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.625188  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:56.625193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:56.625243  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:56.671545  585602 cri.go:89] found id: ""
	I1205 20:33:56.671573  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.671582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:56.671589  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:56.671652  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:56.712760  585602 cri.go:89] found id: ""
	I1205 20:33:56.712797  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.712810  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:56.712818  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:56.712890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:56.751219  585602 cri.go:89] found id: ""
	I1205 20:33:56.751254  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.751266  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:56.751274  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:56.751340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:56.787946  585602 cri.go:89] found id: ""
	I1205 20:33:56.787985  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.787998  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:56.788007  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:56.788101  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:56.823057  585602 cri.go:89] found id: ""
	I1205 20:33:56.823095  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.823108  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:56.823114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:56.823170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:54.164074  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.165063  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.043193  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:58.044158  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.542798  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.624407  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:59.119743  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.860358  585602 cri.go:89] found id: ""
	I1205 20:33:56.860396  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.860408  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:56.860421  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:56.860438  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:56.912954  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:56.912996  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.927642  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:56.927691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:57.007316  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:57.007344  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:57.007359  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:57.091471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:57.091522  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:59.642150  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:59.656240  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:59.656324  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:59.695918  585602 cri.go:89] found id: ""
	I1205 20:33:59.695954  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.695965  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:59.695973  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:59.696037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:59.744218  585602 cri.go:89] found id: ""
	I1205 20:33:59.744250  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.744260  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:59.744278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:59.744340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:59.799035  585602 cri.go:89] found id: ""
	I1205 20:33:59.799081  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.799094  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:59.799102  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:59.799172  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:59.850464  585602 cri.go:89] found id: ""
	I1205 20:33:59.850505  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.850517  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:59.850526  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:59.850590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:59.886441  585602 cri.go:89] found id: ""
	I1205 20:33:59.886477  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.886489  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:59.886497  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:59.886564  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:59.926689  585602 cri.go:89] found id: ""
	I1205 20:33:59.926728  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.926741  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:59.926751  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:59.926821  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:59.962615  585602 cri.go:89] found id: ""
	I1205 20:33:59.962644  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.962653  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:59.962659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:59.962716  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:00.001852  585602 cri.go:89] found id: ""
	I1205 20:34:00.001878  585602 logs.go:282] 0 containers: []
	W1205 20:34:00.001886  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:00.001897  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:00.001913  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:00.055465  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:00.055508  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:00.071904  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:00.071941  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:00.151225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:00.151248  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:00.151262  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:00.233869  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:00.233914  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:58.664773  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.664948  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:02.543019  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:04.543810  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:01.120136  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:03.120824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.620283  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:02.776751  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:02.790868  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:02.790945  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:02.834686  585602 cri.go:89] found id: ""
	I1205 20:34:02.834719  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.834731  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:02.834740  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:02.834823  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:02.871280  585602 cri.go:89] found id: ""
	I1205 20:34:02.871313  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.871333  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:02.871342  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:02.871413  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:02.907300  585602 cri.go:89] found id: ""
	I1205 20:34:02.907336  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.907346  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:02.907352  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:02.907406  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:02.945453  585602 cri.go:89] found id: ""
	I1205 20:34:02.945487  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.945499  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:02.945511  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:02.945587  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:02.980528  585602 cri.go:89] found id: ""
	I1205 20:34:02.980561  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.980573  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:02.980580  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:02.980653  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:03.016919  585602 cri.go:89] found id: ""
	I1205 20:34:03.016946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.016955  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:03.016961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:03.017012  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:03.053541  585602 cri.go:89] found id: ""
	I1205 20:34:03.053575  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.053588  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:03.053596  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:03.053655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:03.089907  585602 cri.go:89] found id: ""
	I1205 20:34:03.089946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.089959  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:03.089974  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:03.089991  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:03.144663  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:03.144700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:03.160101  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:03.160140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:03.231559  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:03.231583  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:03.231600  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:03.313226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:03.313271  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:05.855538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:05.869019  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:05.869120  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:05.906879  585602 cri.go:89] found id: ""
	I1205 20:34:05.906910  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.906921  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:05.906928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:05.906994  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:05.946846  585602 cri.go:89] found id: ""
	I1205 20:34:05.946881  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.946893  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:05.946900  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:05.946968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:05.984067  585602 cri.go:89] found id: ""
	I1205 20:34:05.984104  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.984118  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:05.984127  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:05.984193  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:06.024984  585602 cri.go:89] found id: ""
	I1205 20:34:06.025014  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.025023  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:06.025029  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:06.025091  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:06.064766  585602 cri.go:89] found id: ""
	I1205 20:34:06.064794  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.064806  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:06.064821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:06.064877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:06.105652  585602 cri.go:89] found id: ""
	I1205 20:34:06.105683  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.105691  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:06.105698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:06.105748  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:06.143732  585602 cri.go:89] found id: ""
	I1205 20:34:06.143762  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.143773  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:06.143781  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:06.143857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:06.183397  585602 cri.go:89] found id: ""
	I1205 20:34:06.183429  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.183439  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:06.183449  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:06.183462  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:06.236403  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:06.236449  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:06.250728  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:06.250759  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:06.320983  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:06.321009  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:06.321025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:06.408037  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:06.408084  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:03.164354  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.665345  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:07.044218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:09.543580  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.119532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.119918  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.955959  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:08.968956  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:08.969037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:09.002804  585602 cri.go:89] found id: ""
	I1205 20:34:09.002846  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.002859  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:09.002866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:09.002935  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:09.039098  585602 cri.go:89] found id: ""
	I1205 20:34:09.039191  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.039210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:09.039220  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:09.039291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:09.074727  585602 cri.go:89] found id: ""
	I1205 20:34:09.074764  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.074776  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:09.074792  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:09.074861  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:09.112650  585602 cri.go:89] found id: ""
	I1205 20:34:09.112682  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.112692  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:09.112698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:09.112754  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:09.149301  585602 cri.go:89] found id: ""
	I1205 20:34:09.149346  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.149359  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:09.149368  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:09.149432  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:09.190288  585602 cri.go:89] found id: ""
	I1205 20:34:09.190317  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.190329  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:09.190338  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:09.190404  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:09.225311  585602 cri.go:89] found id: ""
	I1205 20:34:09.225348  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.225361  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:09.225369  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:09.225435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:09.261023  585602 cri.go:89] found id: ""
	I1205 20:34:09.261052  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.261063  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:09.261075  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:09.261092  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:09.313733  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:09.313785  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:09.329567  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:09.329619  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:09.403397  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:09.403430  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:09.403447  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:09.486586  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:09.486630  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:08.163730  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.663603  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.665663  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:11.544538  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.042854  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.120629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.621977  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.028110  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:12.041802  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:12.041866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:12.080349  585602 cri.go:89] found id: ""
	I1205 20:34:12.080388  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.080402  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:12.080410  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:12.080475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:12.121455  585602 cri.go:89] found id: ""
	I1205 20:34:12.121486  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.121499  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:12.121507  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:12.121567  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:12.157743  585602 cri.go:89] found id: ""
	I1205 20:34:12.157768  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.157785  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:12.157794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:12.157855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:12.196901  585602 cri.go:89] found id: ""
	I1205 20:34:12.196933  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.196946  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:12.196954  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:12.197024  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:12.234471  585602 cri.go:89] found id: ""
	I1205 20:34:12.234500  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.234508  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:12.234516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:12.234585  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:12.269238  585602 cri.go:89] found id: ""
	I1205 20:34:12.269263  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.269271  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:12.269278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:12.269340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:12.307965  585602 cri.go:89] found id: ""
	I1205 20:34:12.308006  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.308016  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:12.308022  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:12.308081  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:12.343463  585602 cri.go:89] found id: ""
	I1205 20:34:12.343497  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.343510  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:12.343536  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:12.343574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:12.393393  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:12.393437  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:12.407991  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:12.408025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:12.477868  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:12.477910  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:12.477924  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:12.557274  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:12.557315  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.102587  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:15.115734  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:15.115808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:15.153057  585602 cri.go:89] found id: ""
	I1205 20:34:15.153091  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.153105  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:15.153113  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:15.153182  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:15.192762  585602 cri.go:89] found id: ""
	I1205 20:34:15.192815  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.192825  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:15.192831  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:15.192887  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:15.231330  585602 cri.go:89] found id: ""
	I1205 20:34:15.231364  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.231374  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:15.231380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:15.231435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:15.265229  585602 cri.go:89] found id: ""
	I1205 20:34:15.265262  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.265271  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:15.265278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:15.265350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:15.299596  585602 cri.go:89] found id: ""
	I1205 20:34:15.299624  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.299634  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:15.299640  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:15.299699  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:15.336155  585602 cri.go:89] found id: ""
	I1205 20:34:15.336187  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.336195  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:15.336202  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:15.336256  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:15.371867  585602 cri.go:89] found id: ""
	I1205 20:34:15.371899  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.371909  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:15.371920  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:15.371976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:15.408536  585602 cri.go:89] found id: ""
	I1205 20:34:15.408566  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.408580  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:15.408592  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:15.408609  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:15.422499  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:15.422538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:15.495096  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:15.495131  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:15.495145  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:15.571411  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:15.571461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.612284  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:15.612319  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:15.165343  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.165619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:16.043962  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.542495  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.119936  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:19.622046  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.168869  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:18.184247  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:18.184370  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:18.226078  585602 cri.go:89] found id: ""
	I1205 20:34:18.226112  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.226124  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:18.226133  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:18.226202  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:18.266221  585602 cri.go:89] found id: ""
	I1205 20:34:18.266258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.266270  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:18.266278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:18.266349  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:18.305876  585602 cri.go:89] found id: ""
	I1205 20:34:18.305903  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.305912  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:18.305921  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:18.305971  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:18.342044  585602 cri.go:89] found id: ""
	I1205 20:34:18.342077  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.342089  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:18.342098  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:18.342160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:18.380240  585602 cri.go:89] found id: ""
	I1205 20:34:18.380290  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.380301  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:18.380310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:18.380372  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:18.416228  585602 cri.go:89] found id: ""
	I1205 20:34:18.416258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.416301  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:18.416311  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:18.416380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:18.453368  585602 cri.go:89] found id: ""
	I1205 20:34:18.453407  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.453420  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:18.453429  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:18.453513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:18.491689  585602 cri.go:89] found id: ""
	I1205 20:34:18.491727  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.491739  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:18.491754  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:18.491779  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:18.546614  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:18.546652  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:18.560516  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:18.560547  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:18.637544  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:18.637568  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:18.637582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:18.720410  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:18.720453  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:21.261494  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:21.276378  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:21.276473  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:21.317571  585602 cri.go:89] found id: ""
	I1205 20:34:21.317602  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.317610  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:21.317617  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:21.317670  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:21.355174  585602 cri.go:89] found id: ""
	I1205 20:34:21.355202  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.355210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:21.355217  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:21.355277  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:21.393259  585602 cri.go:89] found id: ""
	I1205 20:34:21.393297  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.393310  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:21.393317  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:21.393408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:21.432286  585602 cri.go:89] found id: ""
	I1205 20:34:21.432329  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.432341  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:21.432348  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:21.432415  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:21.469844  585602 cri.go:89] found id: ""
	I1205 20:34:21.469877  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.469888  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:21.469896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:21.469964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:21.508467  585602 cri.go:89] found id: ""
	I1205 20:34:21.508507  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.508519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:21.508528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:21.508592  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:21.553053  585602 cri.go:89] found id: ""
	I1205 20:34:21.553185  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.553208  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:21.553226  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:21.553317  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:21.590595  585602 cri.go:89] found id: ""
	I1205 20:34:21.590629  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.590640  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:21.590654  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:21.590672  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:21.649493  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:21.649546  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:21.666114  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:21.666147  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:21.742801  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:21.742828  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:21.742858  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:21.822949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:21.823010  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:19.165951  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.664450  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.043233  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:23.043477  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:25.543490  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:22.119177  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.119685  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.366575  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:24.380894  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:24.380992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:24.416907  585602 cri.go:89] found id: ""
	I1205 20:34:24.416943  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.416956  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:24.416965  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:24.417034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:24.453303  585602 cri.go:89] found id: ""
	I1205 20:34:24.453337  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.453349  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:24.453358  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:24.453445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:24.496795  585602 cri.go:89] found id: ""
	I1205 20:34:24.496825  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.496833  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:24.496839  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:24.496907  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:24.539105  585602 cri.go:89] found id: ""
	I1205 20:34:24.539142  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.539154  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:24.539162  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:24.539230  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:24.576778  585602 cri.go:89] found id: ""
	I1205 20:34:24.576808  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.576816  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:24.576822  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:24.576879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:24.617240  585602 cri.go:89] found id: ""
	I1205 20:34:24.617271  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.617280  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:24.617293  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:24.617374  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:24.659274  585602 cri.go:89] found id: ""
	I1205 20:34:24.659316  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.659330  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:24.659342  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:24.659408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:24.701047  585602 cri.go:89] found id: ""
	I1205 20:34:24.701092  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.701105  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:24.701121  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:24.701139  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:24.741070  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:24.741115  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:24.793364  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:24.793407  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:24.807803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:24.807839  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:24.883194  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:24.883225  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:24.883243  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:24.163198  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.165402  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:27.544607  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.044244  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.619847  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:28.621467  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.621704  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
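	(The interleaved pod_ready lines from the other test clusters show the same pattern: the metrics-server pod is re-polled every couple of seconds and keeps reporting Ready=False. A stand-alone equivalent, again only a sketch and not the test harness's own code, would shell out to kubectl and read the pod's Ready condition; the pod and namespace names are copied from the log, the kubeconfig and polling parameters are assumptions.)

	// readiness.go - sketch of a Ready-condition poll like the pod_ready lines above.
	// Assumptions: kubectl is on PATH and the default kubeconfig points at the cluster under test.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podReady(namespace, pod string) (bool, error) {
		// JSONPath extracts the status of the pod's Ready condition ("True"/"False").
		out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		// Pod name copied from the log above; adjust for the cluster being checked.
		const ns, pod = "kube-system", "metrics-server-6867b74b74-rq8xm"
		for i := 0; i < 10; i++ {
			ready, err := podReady(ns, pod)
			if err != nil {
				fmt.Println("poll failed:", err)
			} else {
				fmt.Printf("pod %q Ready=%v\n", pod, ready)
			}
			time.Sleep(2 * time.Second) // roughly the interval seen in the log
		}
	}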
	I1205 20:34:27.467460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:27.483055  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:27.483129  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:27.523718  585602 cri.go:89] found id: ""
	I1205 20:34:27.523752  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.523763  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:27.523772  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:27.523841  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:27.562872  585602 cri.go:89] found id: ""
	I1205 20:34:27.562899  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.562908  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:27.562915  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:27.562976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:27.601804  585602 cri.go:89] found id: ""
	I1205 20:34:27.601835  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.601845  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:27.601852  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:27.601916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:27.640553  585602 cri.go:89] found id: ""
	I1205 20:34:27.640589  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.640599  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:27.640605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:27.640672  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:27.680983  585602 cri.go:89] found id: ""
	I1205 20:34:27.681015  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.681027  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:27.681035  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:27.681105  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:27.720766  585602 cri.go:89] found id: ""
	I1205 20:34:27.720811  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.720821  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:27.720828  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:27.720886  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:27.761422  585602 cri.go:89] found id: ""
	I1205 20:34:27.761453  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.761466  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:27.761480  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:27.761550  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:27.799658  585602 cri.go:89] found id: ""
	I1205 20:34:27.799692  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.799705  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:27.799720  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:27.799736  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:27.851801  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:27.851845  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:27.865953  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:27.865984  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:27.941787  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:27.941824  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:27.941840  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:28.023556  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:28.023616  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:30.573267  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:30.586591  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:30.586679  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:30.629923  585602 cri.go:89] found id: ""
	I1205 20:34:30.629960  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.629974  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:30.629982  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:30.630048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:30.667045  585602 cri.go:89] found id: ""
	I1205 20:34:30.667078  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.667090  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:30.667098  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:30.667167  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:30.704479  585602 cri.go:89] found id: ""
	I1205 20:34:30.704510  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.704522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:30.704530  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:30.704620  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:30.746035  585602 cri.go:89] found id: ""
	I1205 20:34:30.746065  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.746077  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:30.746085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:30.746161  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:30.784375  585602 cri.go:89] found id: ""
	I1205 20:34:30.784415  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.784425  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:30.784431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:30.784487  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:30.821779  585602 cri.go:89] found id: ""
	I1205 20:34:30.821811  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.821822  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:30.821831  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:30.821905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:30.856927  585602 cri.go:89] found id: ""
	I1205 20:34:30.856963  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.856976  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:30.856984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:30.857088  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:30.895852  585602 cri.go:89] found id: ""
	I1205 20:34:30.895882  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.895894  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:30.895914  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:30.895930  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:30.947600  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:30.947642  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:30.962717  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:30.962753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:31.049225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:31.049262  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:31.049280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:31.126806  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:31.126850  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:28.665006  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:31.164172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:32.548634  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.042159  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.120370  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.621247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.670844  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:33.685063  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:33.685160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:33.718277  585602 cri.go:89] found id: ""
	I1205 20:34:33.718312  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.718321  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:33.718327  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:33.718378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.755409  585602 cri.go:89] found id: ""
	I1205 20:34:33.755445  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.755456  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:33.755465  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:33.755542  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:33.809447  585602 cri.go:89] found id: ""
	I1205 20:34:33.809506  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.809519  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:33.809527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:33.809599  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:33.848327  585602 cri.go:89] found id: ""
	I1205 20:34:33.848362  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.848376  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:33.848384  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:33.848444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:33.887045  585602 cri.go:89] found id: ""
	I1205 20:34:33.887082  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.887094  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:33.887103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:33.887178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:33.924385  585602 cri.go:89] found id: ""
	I1205 20:34:33.924418  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.924427  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:33.924434  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:33.924499  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:33.960711  585602 cri.go:89] found id: ""
	I1205 20:34:33.960738  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.960747  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:33.960757  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:33.960808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:33.998150  585602 cri.go:89] found id: ""
	I1205 20:34:33.998184  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.998193  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:33.998203  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:33.998215  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:34.041977  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:34.042006  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:34.095895  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:34.095940  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:34.109802  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:34.109836  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:34.185716  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:34.185740  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:34.185753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:36.767768  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:36.782114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:36.782201  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:36.820606  585602 cri.go:89] found id: ""
	I1205 20:34:36.820647  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.820659  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:36.820668  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:36.820736  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.164572  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.664069  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:37.043102  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:39.544667  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:38.120555  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.619948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:36.858999  585602 cri.go:89] found id: ""
	I1205 20:34:36.859033  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.859044  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:36.859051  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:36.859117  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:36.896222  585602 cri.go:89] found id: ""
	I1205 20:34:36.896257  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.896282  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:36.896290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:36.896352  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:36.935565  585602 cri.go:89] found id: ""
	I1205 20:34:36.935602  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.935612  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:36.935618  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:36.935671  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:36.974031  585602 cri.go:89] found id: ""
	I1205 20:34:36.974066  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.974079  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:36.974096  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:36.974166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:37.018243  585602 cri.go:89] found id: ""
	I1205 20:34:37.018278  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.018290  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:37.018300  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:37.018371  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:37.057715  585602 cri.go:89] found id: ""
	I1205 20:34:37.057742  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.057750  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:37.057756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:37.057806  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:37.099006  585602 cri.go:89] found id: ""
	I1205 20:34:37.099037  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.099045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:37.099055  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:37.099070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:37.186218  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:37.186264  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:37.232921  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:37.232955  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:37.285539  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:37.285581  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:37.301115  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:37.301155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:37.373249  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:39.873692  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:39.887772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:39.887847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:39.925558  585602 cri.go:89] found id: ""
	I1205 20:34:39.925595  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.925607  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:39.925615  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:39.925684  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:39.964967  585602 cri.go:89] found id: ""
	I1205 20:34:39.964994  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.965004  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:39.965011  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:39.965073  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:40.010875  585602 cri.go:89] found id: ""
	I1205 20:34:40.010911  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.010923  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:40.010930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:40.011003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:40.050940  585602 cri.go:89] found id: ""
	I1205 20:34:40.050970  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.050981  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:40.050990  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:40.051052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:40.086157  585602 cri.go:89] found id: ""
	I1205 20:34:40.086197  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.086210  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:40.086219  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:40.086283  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:40.123280  585602 cri.go:89] found id: ""
	I1205 20:34:40.123321  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.123333  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:40.123344  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:40.123414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:40.164755  585602 cri.go:89] found id: ""
	I1205 20:34:40.164784  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.164793  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:40.164800  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:40.164871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:40.211566  585602 cri.go:89] found id: ""
	I1205 20:34:40.211595  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.211608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:40.211621  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:40.211638  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:40.275269  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:40.275326  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:40.303724  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:40.303754  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:40.377315  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:40.377345  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:40.377360  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:40.457744  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:40.457794  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:38.163598  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.164173  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.043947  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:44.542445  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.621824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:45.120127  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:43.000390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:43.015220  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:43.015308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:43.051919  585602 cri.go:89] found id: ""
	I1205 20:34:43.051946  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.051955  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:43.051961  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:43.052034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:43.088188  585602 cri.go:89] found id: ""
	I1205 20:34:43.088230  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.088241  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:43.088249  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:43.088350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:43.125881  585602 cri.go:89] found id: ""
	I1205 20:34:43.125910  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.125922  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:43.125930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:43.125988  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:43.166630  585602 cri.go:89] found id: ""
	I1205 20:34:43.166657  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.166674  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:43.166682  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:43.166744  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:43.206761  585602 cri.go:89] found id: ""
	I1205 20:34:43.206791  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.206803  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:43.206810  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:43.206873  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:43.242989  585602 cri.go:89] found id: ""
	I1205 20:34:43.243017  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.243026  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:43.243033  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:43.243094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:43.281179  585602 cri.go:89] found id: ""
	I1205 20:34:43.281208  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.281217  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:43.281223  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:43.281272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:43.317283  585602 cri.go:89] found id: ""
	I1205 20:34:43.317314  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.317326  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:43.317347  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:43.317362  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:43.369262  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:43.369303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:43.386137  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:43.386182  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:43.458532  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:43.458553  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:43.458566  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:43.538254  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:43.538296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:46.083593  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:46.101024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:46.101133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:46.169786  585602 cri.go:89] found id: ""
	I1205 20:34:46.169817  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.169829  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:46.169838  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:46.169905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:46.218647  585602 cri.go:89] found id: ""
	I1205 20:34:46.218689  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.218704  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:46.218713  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:46.218790  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:46.262718  585602 cri.go:89] found id: ""
	I1205 20:34:46.262749  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.262758  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:46.262764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:46.262846  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:46.301606  585602 cri.go:89] found id: ""
	I1205 20:34:46.301638  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.301649  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:46.301656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:46.301714  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:46.337313  585602 cri.go:89] found id: ""
	I1205 20:34:46.337347  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.337356  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:46.337362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:46.337422  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:46.380171  585602 cri.go:89] found id: ""
	I1205 20:34:46.380201  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.380209  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:46.380215  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:46.380288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:46.423054  585602 cri.go:89] found id: ""
	I1205 20:34:46.423089  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.423101  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:46.423109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:46.423178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:46.467615  585602 cri.go:89] found id: ""
	I1205 20:34:46.467647  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.467659  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:46.467673  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:46.467687  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:46.522529  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:46.522579  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:46.537146  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:46.537199  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:46.609585  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:46.609618  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:46.609637  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:46.696093  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:46.696152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:45.164249  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.664159  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:46.547883  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.043793  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.623375  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:50.122680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.238735  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:49.256406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:49.256484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:49.294416  585602 cri.go:89] found id: ""
	I1205 20:34:49.294449  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.294458  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:49.294467  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:49.294528  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:49.334235  585602 cri.go:89] found id: ""
	I1205 20:34:49.334268  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.334282  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:49.334290  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:49.334362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:49.372560  585602 cri.go:89] found id: ""
	I1205 20:34:49.372637  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.372662  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:49.372674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:49.372756  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:49.413779  585602 cri.go:89] found id: ""
	I1205 20:34:49.413813  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.413822  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:49.413829  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:49.413900  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:49.449513  585602 cri.go:89] found id: ""
	I1205 20:34:49.449543  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.449553  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:49.449560  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:49.449630  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:49.488923  585602 cri.go:89] found id: ""
	I1205 20:34:49.488961  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.488973  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:49.488982  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:49.489050  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:49.524922  585602 cri.go:89] found id: ""
	I1205 20:34:49.524959  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.524971  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:49.524980  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:49.525048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:49.565700  585602 cri.go:89] found id: ""
	I1205 20:34:49.565735  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.565745  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:49.565756  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:49.565769  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:49.624297  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:49.624339  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:49.641424  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:49.641465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:49.721474  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:49.721504  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:49.721517  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:49.810777  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:49.810822  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:49.664998  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.163337  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:51.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:54.045218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.621649  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:55.120035  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.354661  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:52.368481  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:52.368555  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:52.407081  585602 cri.go:89] found id: ""
	I1205 20:34:52.407110  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.407118  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:52.407125  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:52.407189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:52.444462  585602 cri.go:89] found id: ""
	I1205 20:34:52.444489  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.444498  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:52.444505  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:52.444562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:52.483546  585602 cri.go:89] found id: ""
	I1205 20:34:52.483573  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.483582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:52.483595  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:52.483648  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:52.526529  585602 cri.go:89] found id: ""
	I1205 20:34:52.526567  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.526579  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:52.526587  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:52.526655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:52.564875  585602 cri.go:89] found id: ""
	I1205 20:34:52.564904  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.564913  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:52.564919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:52.564984  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:52.599367  585602 cri.go:89] found id: ""
	I1205 20:34:52.599397  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.599410  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:52.599419  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:52.599475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:52.638192  585602 cri.go:89] found id: ""
	I1205 20:34:52.638233  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.638247  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:52.638255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:52.638336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:52.675227  585602 cri.go:89] found id: ""
	I1205 20:34:52.675264  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.675275  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:52.675287  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:52.675311  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:52.716538  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:52.716582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:52.772121  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:52.772162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:52.787598  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:52.787632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:52.865380  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:52.865408  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:52.865422  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.449288  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:55.462386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:55.462474  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:55.498350  585602 cri.go:89] found id: ""
	I1205 20:34:55.498382  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.498391  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:55.498397  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:55.498457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:55.540878  585602 cri.go:89] found id: ""
	I1205 20:34:55.540915  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.540929  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:55.540939  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:55.541022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:55.577248  585602 cri.go:89] found id: ""
	I1205 20:34:55.577277  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.577288  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:55.577294  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:55.577375  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:55.615258  585602 cri.go:89] found id: ""
	I1205 20:34:55.615287  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.615308  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:55.615316  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:55.615384  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:55.652102  585602 cri.go:89] found id: ""
	I1205 20:34:55.652136  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.652147  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:55.652157  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:55.652228  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:55.689353  585602 cri.go:89] found id: ""
	I1205 20:34:55.689387  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.689399  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:55.689408  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:55.689486  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:55.727603  585602 cri.go:89] found id: ""
	I1205 20:34:55.727634  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.727648  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:55.727657  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:55.727729  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:55.765103  585602 cri.go:89] found id: ""
	I1205 20:34:55.765134  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.765143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:55.765156  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:55.765169  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:55.823878  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:55.823923  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:55.838966  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:55.839001  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:55.909385  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:55.909412  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:55.909424  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.992036  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:55.992080  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:54.165488  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.166030  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.542663  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.543260  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:57.120140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:59.621190  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.537231  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:58.552307  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:58.552392  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:58.589150  585602 cri.go:89] found id: ""
	I1205 20:34:58.589184  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.589200  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:58.589206  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:58.589272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:58.630344  585602 cri.go:89] found id: ""
	I1205 20:34:58.630370  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.630378  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:58.630385  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:58.630452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:58.669953  585602 cri.go:89] found id: ""
	I1205 20:34:58.669981  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.669991  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:58.669999  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:58.670055  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:58.708532  585602 cri.go:89] found id: ""
	I1205 20:34:58.708562  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.708570  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:58.708577  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:58.708631  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:58.745944  585602 cri.go:89] found id: ""
	I1205 20:34:58.745975  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.745986  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:58.745994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:58.746051  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.787177  585602 cri.go:89] found id: ""
	I1205 20:34:58.787206  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.787214  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:58.787221  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:58.787272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:58.822084  585602 cri.go:89] found id: ""
	I1205 20:34:58.822123  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.822134  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:58.822142  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:58.822210  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:58.858608  585602 cri.go:89] found id: ""
	I1205 20:34:58.858645  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.858657  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:58.858670  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:58.858691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:58.873289  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:58.873322  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:58.947855  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:58.947884  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:58.947900  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:59.028348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:59.028397  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:59.069172  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:59.069206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.623309  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:01.637362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:01.637449  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:01.678867  585602 cri.go:89] found id: ""
	I1205 20:35:01.678907  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.678919  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:01.678928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:01.679001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:01.715333  585602 cri.go:89] found id: ""
	I1205 20:35:01.715364  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.715372  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:01.715379  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:01.715439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:01.754247  585602 cri.go:89] found id: ""
	I1205 20:35:01.754277  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.754286  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:01.754292  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:01.754348  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:01.791922  585602 cri.go:89] found id: ""
	I1205 20:35:01.791957  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.791968  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:01.791977  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:01.792045  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:01.827261  585602 cri.go:89] found id: ""
	I1205 20:35:01.827294  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.827307  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:01.827315  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:01.827389  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.665248  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.163431  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.043056  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:03.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:02.122540  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:04.620544  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.864205  585602 cri.go:89] found id: ""
	I1205 20:35:01.864234  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.864243  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:01.864249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:01.864332  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:01.902740  585602 cri.go:89] found id: ""
	I1205 20:35:01.902773  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.902783  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:01.902789  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:01.902857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:01.941627  585602 cri.go:89] found id: ""
	I1205 20:35:01.941657  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.941666  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:01.941677  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:01.941690  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.995743  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:01.995791  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:02.010327  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:02.010368  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:02.086879  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:02.086907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:02.086921  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:02.166500  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:02.166538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
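
The cycle above gathers diagnostics by shelling out to journalctl, dmesg, kubectl describe nodes, and a container-status listing that falls back from crictl to docker. For reference, the same diagnostics can be collected by hand on the node; the commands below are copied from the Run: lines above (describe nodes fails here for the same reason it does in the log: nothing is listening on localhost:8443 yet).

    # Reproduce minikube's diagnostic gathering on the node (commands as logged above).
    sudo journalctl -u kubelet -n 400                                          # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig                             # fails while the apiserver is down
    sudo journalctl -u crio -n 400                                             # CRI-O logs
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status, docker fallback

The `which crictl || echo crictl` indirection simply lets the final `|| sudo docker ps -a` fallback kick in cleanly when crictl is not installed.
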
	I1205 20:35:04.716638  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:04.730922  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:04.730992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:04.768492  585602 cri.go:89] found id: ""
	I1205 20:35:04.768524  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.768534  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:04.768540  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:04.768606  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:04.803740  585602 cri.go:89] found id: ""
	I1205 20:35:04.803776  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.803789  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:04.803797  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:04.803866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:04.840907  585602 cri.go:89] found id: ""
	I1205 20:35:04.840947  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.840960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:04.840968  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:04.841036  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:04.875901  585602 cri.go:89] found id: ""
	I1205 20:35:04.875933  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.875943  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:04.875949  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:04.876003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:04.913581  585602 cri.go:89] found id: ""
	I1205 20:35:04.913617  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.913627  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:04.913634  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:04.913689  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:04.952460  585602 cri.go:89] found id: ""
	I1205 20:35:04.952504  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.952519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:04.952528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:04.952617  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:04.989939  585602 cri.go:89] found id: ""
	I1205 20:35:04.989968  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.989979  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:04.989985  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:04.990041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:05.025017  585602 cri.go:89] found id: ""
	I1205 20:35:05.025052  585602 logs.go:282] 0 containers: []
	W1205 20:35:05.025066  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:05.025078  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:05.025094  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:05.068179  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:05.068223  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:05.127311  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:05.127369  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:05.141092  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:05.141129  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:05.217648  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:05.217678  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:05.217691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:03.163987  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:05.164131  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:07.165804  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:06.043765  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:08.036400  585113 pod_ready.go:82] duration metric: took 4m0.000157493s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	E1205 20:35:08.036457  585113 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:35:08.036489  585113 pod_ready.go:39] duration metric: took 4m11.05050249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:08.036554  585113 kubeadm.go:597] duration metric: took 4m18.178903617s to restartPrimaryControlPlane
	W1205 20:35:08.036733  585113 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:08.036784  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:06.621887  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:09.119692  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:07.793457  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:07.808710  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:07.808778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:07.846331  585602 cri.go:89] found id: ""
	I1205 20:35:07.846366  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.846380  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:07.846389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:07.846462  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:07.881185  585602 cri.go:89] found id: ""
	I1205 20:35:07.881222  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.881236  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:07.881243  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:07.881307  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:07.918463  585602 cri.go:89] found id: ""
	I1205 20:35:07.918501  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.918514  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:07.918522  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:07.918589  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:07.956329  585602 cri.go:89] found id: ""
	I1205 20:35:07.956364  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.956375  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:07.956385  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:07.956456  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:07.992173  585602 cri.go:89] found id: ""
	I1205 20:35:07.992212  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.992222  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:07.992229  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:07.992318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:08.030183  585602 cri.go:89] found id: ""
	I1205 20:35:08.030214  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.030226  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:08.030235  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:08.030309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:08.072320  585602 cri.go:89] found id: ""
	I1205 20:35:08.072362  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.072374  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:08.072382  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:08.072452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:08.124220  585602 cri.go:89] found id: ""
	I1205 20:35:08.124253  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.124277  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:08.124292  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:08.124310  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:08.171023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:08.171057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:08.237645  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:08.237699  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:08.252708  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:08.252744  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:08.343107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:08.343140  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:08.343158  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:10.919646  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:10.934494  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:10.934562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:10.971816  585602 cri.go:89] found id: ""
	I1205 20:35:10.971855  585602 logs.go:282] 0 containers: []
	W1205 20:35:10.971868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:10.971878  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:10.971950  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:11.010031  585602 cri.go:89] found id: ""
	I1205 20:35:11.010071  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.010084  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:11.010095  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:11.010170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:11.046520  585602 cri.go:89] found id: ""
	I1205 20:35:11.046552  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.046561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:11.046568  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:11.046632  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:11.081385  585602 cri.go:89] found id: ""
	I1205 20:35:11.081426  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.081440  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:11.081448  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:11.081522  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:11.122529  585602 cri.go:89] found id: ""
	I1205 20:35:11.122559  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.122568  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:11.122576  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:11.122656  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:11.161684  585602 cri.go:89] found id: ""
	I1205 20:35:11.161767  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.161788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:11.161797  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:11.161862  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:11.199796  585602 cri.go:89] found id: ""
	I1205 20:35:11.199824  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.199833  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:11.199842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:11.199916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:11.235580  585602 cri.go:89] found id: ""
	I1205 20:35:11.235617  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.235625  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:11.235635  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:11.235647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:11.291005  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:11.291055  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:11.305902  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:11.305947  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:11.375862  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:11.375894  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:11.375915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:11.456701  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:11.456746  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:09.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.664200  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.119954  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:13.120903  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:15.622247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:14.006509  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:14.020437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:14.020531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:14.056878  585602 cri.go:89] found id: ""
	I1205 20:35:14.056905  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.056915  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:14.056923  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:14.056993  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:14.091747  585602 cri.go:89] found id: ""
	I1205 20:35:14.091782  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.091792  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:14.091800  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:14.091860  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:14.131409  585602 cri.go:89] found id: ""
	I1205 20:35:14.131440  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.131453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:14.131461  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:14.131532  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:14.170726  585602 cri.go:89] found id: ""
	I1205 20:35:14.170754  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.170765  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:14.170773  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:14.170851  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:14.208619  585602 cri.go:89] found id: ""
	I1205 20:35:14.208654  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.208666  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:14.208674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:14.208747  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:14.247734  585602 cri.go:89] found id: ""
	I1205 20:35:14.247771  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.247784  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:14.247793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:14.247855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:14.296090  585602 cri.go:89] found id: ""
	I1205 20:35:14.296119  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.296129  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:14.296136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:14.296205  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:14.331009  585602 cri.go:89] found id: ""
	I1205 20:35:14.331037  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.331045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:14.331057  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:14.331070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:14.384877  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:14.384935  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:14.400458  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:14.400507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:14.475745  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:14.475774  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:14.475787  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:14.553150  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:14.553192  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:14.164516  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:16.165316  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:18.119418  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.120499  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:17.095700  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:17.109135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:17.109215  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:17.146805  585602 cri.go:89] found id: ""
	I1205 20:35:17.146838  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.146851  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:17.146861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:17.146919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:17.186861  585602 cri.go:89] found id: ""
	I1205 20:35:17.186891  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.186901  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:17.186907  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:17.186960  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:17.223113  585602 cri.go:89] found id: ""
	I1205 20:35:17.223148  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.223159  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:17.223166  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:17.223238  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:17.263066  585602 cri.go:89] found id: ""
	I1205 20:35:17.263098  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.263110  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:17.263118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:17.263187  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:17.300113  585602 cri.go:89] found id: ""
	I1205 20:35:17.300153  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.300167  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:17.300175  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:17.300237  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:17.339135  585602 cri.go:89] found id: ""
	I1205 20:35:17.339172  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.339184  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:17.339193  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:17.339260  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:17.376200  585602 cri.go:89] found id: ""
	I1205 20:35:17.376229  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.376239  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:17.376248  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:17.376354  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:17.411852  585602 cri.go:89] found id: ""
	I1205 20:35:17.411895  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.411906  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:17.411919  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:17.411948  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:17.463690  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:17.463729  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:17.478912  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:17.478946  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:17.552874  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:17.552907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:17.552933  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:17.633621  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:17.633667  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:20.175664  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:20.191495  585602 kubeadm.go:597] duration metric: took 4m4.568774806s to restartPrimaryControlPlane
	W1205 20:35:20.191570  585602 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:20.191594  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:20.660014  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:20.676684  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:20.688338  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:20.699748  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:20.699770  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:20.699822  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:20.710417  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:20.710497  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:20.722295  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:20.732854  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:20.732933  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:20.744242  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.754593  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:20.754671  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.766443  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:20.777087  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:20.777157  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
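
The config check above follows a simple pattern: list the four kubeconfig files, then keep each one only if it already points at the expected control-plane endpoint, and otherwise delete it before running kubeadm init. A minimal sketch of that cleanup, using the same paths and endpoint that appear in the log:

    # Sketch of the stale-config cleanup seen above (paths and endpoint as logged).
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # missing or pointing elsewhere: remove before kubeadm init
      fi
    done
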
	I1205 20:35:20.788406  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:20.869602  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:35:20.869778  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:21.022417  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:21.022558  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:21.022715  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:35:21.213817  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:21.216995  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:21.217146  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:21.217240  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:21.217373  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:21.217502  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:21.217614  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:21.217699  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:21.217784  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:21.217876  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:21.217985  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:21.218129  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:21.218186  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:21.218289  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:21.337924  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:21.464355  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:21.709734  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:21.837040  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:21.860767  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:21.860894  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:21.860934  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:22.002564  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:18.663978  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.665113  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.622593  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.120101  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.004407  585602 out.go:235]   - Booting up control plane ...
	I1205 20:35:22.004560  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:22.009319  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:22.010412  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:22.019041  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:22.021855  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:35:23.163493  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.164833  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.164914  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.619140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.622476  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.664525  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:32.163413  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.411201  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.37438104s)
	I1205 20:35:34.411295  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:34.428580  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:34.439233  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:34.450165  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:34.450192  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:34.450255  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:34.461910  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:34.461985  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:34.473936  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:34.484160  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:34.484240  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:34.495772  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.507681  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:34.507757  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.519932  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:34.532111  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:34.532190  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:35:34.543360  585113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:34.594095  585113 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:35:34.594214  585113 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:34.712502  585113 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:34.712685  585113 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:34.712818  585113 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:35:34.729419  585113 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:34.731281  585113 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:34.731395  585113 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:34.731486  585113 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:34.731614  585113 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:34.731715  585113 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:34.731812  585113 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:34.731902  585113 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:34.731994  585113 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:34.732082  585113 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:34.732179  585113 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:34.732252  585113 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:34.732336  585113 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:34.732428  585113 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:35.125135  585113 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:35.188591  585113 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:35:35.330713  585113 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:35.497785  585113 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:35.839010  585113 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:35.839656  585113 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:35.842311  585113 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:32.118898  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.119153  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.164007  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:36.164138  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:35.844403  585113 out.go:235]   - Booting up control plane ...
	I1205 20:35:35.844534  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:35.844602  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:35.845242  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:35.865676  585113 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:35.871729  585113 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:35.871825  585113 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:36.007728  585113 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:35:36.007948  585113 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:35:36.510090  585113 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.141078ms
	I1205 20:35:36.510208  585113 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
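
The two waits above poll well-known health endpoints: the kubelet's local healthz on port 10248 and the API server's health on the cluster port (8443 here). Run on the control-plane node, the checks look roughly like the sketch below; it assumes anonymous access to the API server health endpoint, which kubeadm clusters allow by default.

    # Probe the same health endpoints kubeadm waits on (ports as shown in the log).
    curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
    curl -skf https://localhost:8443/healthz && echo "apiserver healthy"   # -k: cluster-internal certificate
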
	I1205 20:35:36.119432  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:38.121093  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.620523  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:41.512166  585113 kubeadm.go:310] [api-check] The API server is healthy after 5.00243802s
	I1205 20:35:41.529257  585113 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:35:41.545958  585113 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:35:41.585500  585113 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:35:41.585726  585113 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-789000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:35:41.606394  585113 kubeadm.go:310] [bootstrap-token] Using token: j30n5x.myrhz9pya6yl1f1z
	I1205 20:35:41.608046  585113 out.go:235]   - Configuring RBAC rules ...
	I1205 20:35:41.608229  585113 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:35:41.616083  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:35:41.625777  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:35:41.629934  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:35:41.633726  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:35:41.640454  585113 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:35:41.923125  585113 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:35:42.363841  585113 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:35:42.924569  585113 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:35:42.924594  585113 kubeadm.go:310] 
	I1205 20:35:42.924660  585113 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:35:42.924668  585113 kubeadm.go:310] 
	I1205 20:35:42.924750  585113 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:35:42.924768  585113 kubeadm.go:310] 
	I1205 20:35:42.924802  585113 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:35:42.924865  585113 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:35:42.924926  585113 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:35:42.924969  585113 kubeadm.go:310] 
	I1205 20:35:42.925060  585113 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:35:42.925069  585113 kubeadm.go:310] 
	I1205 20:35:42.925120  585113 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:35:42.925154  585113 kubeadm.go:310] 
	I1205 20:35:42.925255  585113 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:35:42.925374  585113 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:35:42.925477  585113 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:35:42.925488  585113 kubeadm.go:310] 
	I1205 20:35:42.925604  585113 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:35:42.925691  585113 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:35:42.925701  585113 kubeadm.go:310] 
	I1205 20:35:42.925830  585113 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.925966  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:35:42.926019  585113 kubeadm.go:310] 	--control-plane 
	I1205 20:35:42.926034  585113 kubeadm.go:310] 
	I1205 20:35:42.926136  585113 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:35:42.926147  585113 kubeadm.go:310] 
	I1205 20:35:42.926258  585113 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.926400  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 20:35:42.927105  585113 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:35:42.927269  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:35:42.927283  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:35:42.929046  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:35:38.164698  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.665499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:42.930620  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:35:42.941706  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
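
The conflist copied above is not shown in the log (only its size, 496 bytes). For orientation, a minimal bridge configuration of the standard CNI conflist form looks roughly like the following; the field values are illustrative, not minikube's actual contents.

    # Illustrative only: a minimal bridge CNI conflist of the standard form.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
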
	I1205 20:35:42.964041  585113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:35:42.964154  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.964191  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-789000 minikube.k8s.io/updated_at=2024_12_05T20_35_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=embed-certs-789000 minikube.k8s.io/primary=true
	I1205 20:35:43.027876  585113 ops.go:34] apiserver oom_adj: -16
	I1205 20:35:43.203087  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:43.703446  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.203895  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.703277  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:45.203421  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.623820  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.118957  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.704129  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.203682  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.703213  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.203225  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.330051  585113 kubeadm.go:1113] duration metric: took 4.365966546s to wait for elevateKubeSystemPrivileges
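The elevateKubeSystemPrivileges step above amounts to the cluster-admin binding plus polling for the default service account; a rough shell equivalent (illustrative sketch using the same bundled kubectl and kubeconfig paths shown in the log):
    KUBECTL=/var/lib/minikube/binaries/v1.31.2/kubectl
    KCFG=/var/lib/minikube/kubeconfig
    sudo $KUBECTL --kubeconfig=$KCFG create clusterrolebinding minikube-rbac \
        --clusterrole=cluster-admin --serviceaccount=kube-system:default
    until sudo $KUBECTL --kubeconfig=$KCFG get sa default >/dev/null 2>&1; do
        sleep 0.5   # the log shows minikube retrying roughly every 500ms
    done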
	I1205 20:35:47.330104  585113 kubeadm.go:394] duration metric: took 4m57.530103825s to StartCluster
	I1205 20:35:47.330143  585113 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.330296  585113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:35:47.332937  585113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.333273  585113 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:47.333380  585113 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:35:47.333478  585113 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-789000"
	I1205 20:35:47.333500  585113 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-789000"
	I1205 20:35:47.333499  585113 addons.go:69] Setting default-storageclass=true in profile "embed-certs-789000"
	W1205 20:35:47.333510  585113 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:35:47.333523  585113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-789000"
	I1205 20:35:47.333545  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.333554  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:47.333631  585113 addons.go:69] Setting metrics-server=true in profile "embed-certs-789000"
	I1205 20:35:47.333651  585113 addons.go:234] Setting addon metrics-server=true in "embed-certs-789000"
	W1205 20:35:47.333660  585113 addons.go:243] addon metrics-server should already be in state true
	I1205 20:35:47.333692  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.334001  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334043  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334003  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334101  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334157  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334339  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.335448  585113 out.go:177] * Verifying Kubernetes components...
	I1205 20:35:47.337056  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:47.353039  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I1205 20:35:47.353726  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.354437  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.354467  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.354870  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.355580  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.355654  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.355702  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I1205 20:35:47.355760  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1205 20:35:47.356180  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356224  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356771  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356796  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.356815  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356834  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.357246  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357245  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.357862  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.357916  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.361951  585113 addons.go:234] Setting addon default-storageclass=true in "embed-certs-789000"
	W1205 20:35:47.361974  585113 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:35:47.362004  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.362369  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.362416  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.372862  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I1205 20:35:47.373465  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.373983  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.374011  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.374347  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.374570  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.376329  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.378476  585113 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:35:47.379882  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:35:47.379909  585113 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:35:47.379933  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.382045  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
	I1205 20:35:47.382855  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.383440  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.383459  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.383563  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.383828  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.384092  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.384101  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.384117  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.384150  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1205 20:35:47.384381  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.384517  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.384635  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.384705  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.384850  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.385249  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.385262  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.385613  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.385744  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.386054  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.386085  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.387649  585113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:35:43.164980  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.665449  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.665725  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.388998  585113 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.389011  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:35:47.389025  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.391724  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392285  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.392317  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392362  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.392521  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.392663  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.392804  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.402558  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I1205 20:35:47.403109  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.403636  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.403653  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.403977  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.404155  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.405636  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.405859  585113 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.405876  585113 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:35:47.405894  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.408366  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.408827  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.408868  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.409107  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.409276  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.409436  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.409577  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.589046  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:47.620164  585113 node_ready.go:35] waiting up to 6m0s for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635800  585113 node_ready.go:49] node "embed-certs-789000" has status "Ready":"True"
	I1205 20:35:47.635824  585113 node_ready.go:38] duration metric: took 15.625152ms for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635836  585113 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:47.647842  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:47.738529  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:35:47.738558  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:35:47.741247  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.741443  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.822503  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:35:47.822543  585113 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:35:47.886482  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:47.886512  585113 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:35:47.926018  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:48.100013  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100059  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.100371  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.100392  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.100408  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100416  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.102261  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.102313  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.102342  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115407  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.115429  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.115762  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.115859  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115870  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721035  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721068  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721380  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721400  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.721447  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721855  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721868  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721880  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.294512  585113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.36844122s)
	I1205 20:35:49.294581  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.294598  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.294953  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295014  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295028  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295057  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.295071  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.295341  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295391  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295403  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295414  585113 addons.go:475] Verifying addon metrics-server=true in "embed-certs-789000"
	I1205 20:35:49.297183  585113 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:35:49.298509  585113 addons.go:510] duration metric: took 1.965140064s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
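The metrics-server objects applied above can be checked by hand once the addon reports enabled; a quick verification sketch (assumes kubectl points at this cluster; the APIService name is the one the upstream metrics-server addon normally registers):
    kubectl -n kube-system get deploy,svc metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io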
	I1205 20:35:49.657195  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.121445  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:49.622568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:50.163712  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.165654  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.155012  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.155309  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.155346  585113 pod_ready.go:82] duration metric: took 6.507465102s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.155356  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160866  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.160895  585113 pod_ready.go:82] duration metric: took 5.529623ms for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160909  585113 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166444  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.166475  585113 pod_ready.go:82] duration metric: took 5.558605ms for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166487  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:52.118202  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.119543  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.664661  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.162802  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:56.172832  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.173005  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.173052  585113 pod_ready.go:82] duration metric: took 3.006542827s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.173068  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178461  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.178489  585113 pod_ready.go:82] duration metric: took 5.413563ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178499  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183130  585113 pod_ready.go:93] pod "kube-proxy-znjpk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.183162  585113 pod_ready.go:82] duration metric: took 4.655743ms for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183178  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351816  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.351842  585113 pod_ready.go:82] duration metric: took 168.656328ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351851  585113 pod_ready.go:39] duration metric: took 9.716003373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:57.351866  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:35:57.351921  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:57.368439  585113 api_server.go:72] duration metric: took 10.035127798s to wait for apiserver process to appear ...
	I1205 20:35:57.368471  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:35:57.368496  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:35:57.372531  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:35:57.373449  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:35:57.373466  585113 api_server.go:131] duration metric: took 4.987422ms to wait for apiserver health ...
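The healthz probe above can be reproduced from the host against the same endpoint; a minimal sketch (-k because the apiserver serves minikube's self-signed certificate; the unauthenticated paths may require credentials if anonymous auth is disabled):
    curl -k https://192.168.39.200:8443/healthz    # expect: ok
    curl -k https://192.168.39.200:8443/version    # reports the control plane version (v1.31.2 here)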
	I1205 20:35:57.373474  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:35:57.554591  585113 system_pods.go:59] 9 kube-system pods found
	I1205 20:35:57.554620  585113 system_pods.go:61] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.554625  585113 system_pods.go:61] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.554629  585113 system_pods.go:61] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.554633  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.554637  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.554640  585113 system_pods.go:61] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.554643  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.554649  585113 system_pods.go:61] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.554653  585113 system_pods.go:61] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.554660  585113 system_pods.go:74] duration metric: took 181.180919ms to wait for pod list to return data ...
	I1205 20:35:57.554667  585113 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:35:57.757196  585113 default_sa.go:45] found service account: "default"
	I1205 20:35:57.757226  585113 default_sa.go:55] duration metric: took 202.553823ms for default service account to be created ...
	I1205 20:35:57.757236  585113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:35:57.956943  585113 system_pods.go:86] 9 kube-system pods found
	I1205 20:35:57.956976  585113 system_pods.go:89] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.956982  585113 system_pods.go:89] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.956985  585113 system_pods.go:89] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.956989  585113 system_pods.go:89] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.956992  585113 system_pods.go:89] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.956996  585113 system_pods.go:89] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.956999  585113 system_pods.go:89] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.957005  585113 system_pods.go:89] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.957010  585113 system_pods.go:89] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.957019  585113 system_pods.go:126] duration metric: took 199.777723ms to wait for k8s-apps to be running ...
	I1205 20:35:57.957028  585113 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:35:57.957079  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:57.971959  585113 system_svc.go:56] duration metric: took 14.916307ms WaitForService to wait for kubelet
	I1205 20:35:57.972000  585113 kubeadm.go:582] duration metric: took 10.638693638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:35:57.972027  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:35:58.153272  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:35:58.153302  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:35:58.153323  585113 node_conditions.go:105] duration metric: took 181.282208ms to run NodePressure ...
	I1205 20:35:58.153338  585113 start.go:241] waiting for startup goroutines ...
	I1205 20:35:58.153348  585113 start.go:246] waiting for cluster config update ...
	I1205 20:35:58.153361  585113 start.go:255] writing updated cluster config ...
	I1205 20:35:58.153689  585113 ssh_runner.go:195] Run: rm -f paused
	I1205 20:35:58.206377  585113 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:35:58.208199  585113 out.go:177] * Done! kubectl is now configured to use "embed-certs-789000" cluster and "default" namespace by default
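With the profile finished, the cluster is usable through the context minikube wrote (usage sketch, assuming the context is named after the profile as minikube does by default):
    kubectl config use-context embed-certs-789000
    kubectl get nodes -o wide
    kubectl -n kube-system get pods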
	I1205 20:35:56.626799  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.119621  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.164803  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.663254  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.119680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:03.121023  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.121537  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:02.025194  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:36:02.025306  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:02.025498  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:03.664172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.672410  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.623229  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.119845  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.025608  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:07.025922  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:08.164875  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.665374  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:12.622566  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.120084  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:13.163662  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.164021  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.619629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:19.620524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.026490  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:17.026747  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
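The kubelet-check lines above are kubeadm retrying the kubelet's local health endpoint; when it keeps failing like this, the usual next step is to inspect the kubelet service on that node (diagnostic sketch using the same endpoint and unit names that appear in the log):
    curl -sSL http://localhost:10248/healthz       # the exact probe kubeadm is retrying
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager   # recent kubelet log lines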
	I1205 20:36:19.663904  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:22.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:21.621019  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.119524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.164932  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.670748  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.119795  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:27.113870  585025 pod_ready.go:82] duration metric: took 4m0.000886242s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:27.113920  585025 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:36:27.113943  585025 pod_ready.go:39] duration metric: took 4m14.547292745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:27.113975  585025 kubeadm.go:597] duration metric: took 4m21.939840666s to restartPrimaryControlPlane
	W1205 20:36:27.114068  585025 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:36:27.114099  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:36:29.163499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:29.664158  585929 pod_ready.go:82] duration metric: took 4m0.007168384s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:29.664191  585929 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:36:29.664201  585929 pod_ready.go:39] duration metric: took 4m2.00733866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
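When a metrics-server pod stays NotReady for the whole wait window like this, the pod events and container logs are the first things to pull (sketch; the pod name is the one from the log above and will differ per run):
    kubectl -n kube-system describe pod metrics-server-6867b74b74-rq8xm
    kubectl -n kube-system logs deploy/metrics-server --all-containers --tail=100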
	I1205 20:36:29.664226  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:36:29.664290  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:29.664377  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:29.712790  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:29.712814  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:29.712819  585929 cri.go:89] found id: ""
	I1205 20:36:29.712826  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:29.712879  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.717751  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.721968  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:29.722045  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:29.770289  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:29.770322  585929 cri.go:89] found id: ""
	I1205 20:36:29.770330  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:29.770392  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.775391  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:29.775475  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:29.816354  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:29.816380  585929 cri.go:89] found id: ""
	I1205 20:36:29.816388  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:29.816454  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.821546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:29.821621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:29.870442  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:29.870467  585929 cri.go:89] found id: ""
	I1205 20:36:29.870476  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:29.870541  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.875546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:29.875658  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:29.924567  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:29.924595  585929 cri.go:89] found id: ""
	I1205 20:36:29.924603  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:29.924666  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.929148  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:29.929216  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:29.968092  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:29.968122  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:29.968126  585929 cri.go:89] found id: ""
	I1205 20:36:29.968134  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:29.968186  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.973062  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.977693  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:29.977762  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:30.014944  585929 cri.go:89] found id: ""
	I1205 20:36:30.014982  585929 logs.go:282] 0 containers: []
	W1205 20:36:30.014994  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:30.015002  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:30.015101  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:30.062304  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:30.062328  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:30.062332  585929 cri.go:89] found id: ""
	I1205 20:36:30.062339  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:30.062394  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.067152  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.071767  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:30.071788  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:30.125030  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:30.125069  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:30.167607  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:30.167641  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:30.217522  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:30.217558  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:30.298655  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:30.298695  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:30.346687  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:30.346721  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:30.887069  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:30.887126  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:30.907313  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:30.907360  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:30.950285  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:30.950326  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:30.990895  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:30.990929  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:31.032950  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:31.033010  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:31.115132  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:31.115176  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:31.257760  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:31.257797  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:31.300521  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:31.300553  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:31.338339  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:31.338373  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
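The log gathering above is plain crictl and journalctl; any single piece can be repeated by hand on the node with the same flags (sketch; <container-id> comes from the first command):
    sudo crictl ps -a --quiet --name=kube-apiserver     # container IDs for one component
    sudo crictl logs --tail 400 <container-id>          # last 400 lines from that container
    sudo journalctl -u crio -n 400 --no-pager           # CRI-O runtime logs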
	I1205 20:36:33.892406  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:36:33.908917  585929 api_server.go:72] duration metric: took 4m14.472283422s to wait for apiserver process to appear ...
	I1205 20:36:33.908950  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:36:33.908993  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:33.909067  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:33.958461  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:33.958496  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:33.958502  585929 cri.go:89] found id: ""
	I1205 20:36:33.958511  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:33.958585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.963333  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.969472  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:33.969549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:34.010687  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.010711  585929 cri.go:89] found id: ""
	I1205 20:36:34.010721  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:34.010790  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.016468  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:34.016557  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:34.056627  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:34.056656  585929 cri.go:89] found id: ""
	I1205 20:36:34.056666  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:34.056729  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.061343  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:34.061411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:34.099534  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:34.099563  585929 cri.go:89] found id: ""
	I1205 20:36:34.099573  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:34.099643  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.104828  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:34.104891  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:34.150749  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:34.150781  585929 cri.go:89] found id: ""
	I1205 20:36:34.150792  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:34.150863  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.155718  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:34.155797  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:34.202896  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:34.202927  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:34.202934  585929 cri.go:89] found id: ""
	I1205 20:36:34.202943  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:34.203028  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.207791  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.212163  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:34.212243  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:34.254423  585929 cri.go:89] found id: ""
	I1205 20:36:34.254458  585929 logs.go:282] 0 containers: []
	W1205 20:36:34.254470  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:34.254479  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:34.254549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:34.294704  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:34.294737  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:34.294741  585929 cri.go:89] found id: ""
	I1205 20:36:34.294753  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:34.294820  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.299361  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.305411  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:34.305437  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:34.357438  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:34.357472  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.405858  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:34.405893  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:34.898506  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:34.898551  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:35.009818  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:35.009856  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:35.048852  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:35.048882  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:35.100458  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:35.100511  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:35.139923  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:35.139959  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:35.184818  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:35.184852  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:35.265196  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:35.265238  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:35.280790  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:35.280830  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:35.323308  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:35.323343  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:35.364578  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:35.364610  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:35.411413  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:35.411456  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:35.458077  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:35.458117  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
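The "Gathering logs for ..." steps above follow a fixed two-step pattern: resolve each component to its container IDs with crictl, then tail the container logs. A minimal sketch of reproducing one round by hand on the node, assuming crictl is installed and pointed at the CRI-O socket; <container-id> is a placeholder taken from the first command's output:

  # list all containers (running or exited) for a given component name
  sudo crictl ps -a --quiet --name=storage-provisioner
  # tail the last 400 lines of one of the IDs printed above
  sudo crictl logs --tail 400 <container-id>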
	I1205 20:36:37.997701  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:36:38.003308  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:36:38.004465  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:36:38.004495  585929 api_server.go:131] duration metric: took 4.095536578s to wait for apiserver health ...
	I1205 20:36:38.004505  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:36:38.004532  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:38.004598  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
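The healthz probe above only expects an HTTP 200 with the body "ok" from the apiserver. A hedged equivalent from outside the test harness, reusing the endpoint from the log and skipping certificate verification because the check is about reachability rather than trust:

  curl -k https://192.168.50.96:8444/healthz
  # expected output: ok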
	I1205 20:36:37.027599  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:37.027910  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:38.048388  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.048427  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:38.048434  585929 cri.go:89] found id: ""
	I1205 20:36:38.048442  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:38.048514  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.052931  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.057338  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:38.057403  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:38.097715  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.097750  585929 cri.go:89] found id: ""
	I1205 20:36:38.097761  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:38.097830  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.104038  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:38.104110  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:38.148485  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.148510  585929 cri.go:89] found id: ""
	I1205 20:36:38.148519  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:38.148585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.153619  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:38.153702  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:38.190467  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.190495  585929 cri.go:89] found id: ""
	I1205 20:36:38.190505  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:38.190561  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.195177  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:38.195259  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:38.240020  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:38.240045  585929 cri.go:89] found id: ""
	I1205 20:36:38.240054  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:38.240123  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.244359  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:38.244425  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:38.282241  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:38.282267  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.282284  585929 cri.go:89] found id: ""
	I1205 20:36:38.282292  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:38.282357  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.287437  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.291561  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:38.291621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:38.333299  585929 cri.go:89] found id: ""
	I1205 20:36:38.333335  585929 logs.go:282] 0 containers: []
	W1205 20:36:38.333345  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:38.333352  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:38.333411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:38.370920  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.370948  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.370952  585929 cri.go:89] found id: ""
	I1205 20:36:38.370960  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:38.371037  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.375549  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.379517  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:38.379548  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.416990  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:38.417023  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:38.499859  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:38.499905  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:38.625291  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:38.625332  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.672549  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:38.672586  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.710017  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:38.710055  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.754004  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:38.754049  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:38.802163  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:38.802206  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:38.817670  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:38.817704  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.864833  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:38.864875  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.909490  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:38.909526  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.952117  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:38.952164  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:39.347620  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:39.347686  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:39.392412  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:39.392450  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:39.433711  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:39.433749  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:41.996602  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:36:41.996634  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:41.996640  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:41.996644  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:41.996648  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:41.996651  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:41.996654  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:41.996661  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:41.996665  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:41.996674  585929 system_pods.go:74] duration metric: took 3.992162062s to wait for pod list to return data ...
	I1205 20:36:41.996682  585929 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:36:41.999553  585929 default_sa.go:45] found service account: "default"
	I1205 20:36:41.999580  585929 default_sa.go:55] duration metric: took 2.889197ms for default service account to be created ...
	I1205 20:36:41.999589  585929 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:36:42.005061  585929 system_pods.go:86] 8 kube-system pods found
	I1205 20:36:42.005099  585929 system_pods.go:89] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:42.005111  585929 system_pods.go:89] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:42.005118  585929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:42.005126  585929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:42.005135  585929 system_pods.go:89] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:42.005143  585929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:42.005159  585929 system_pods.go:89] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:42.005171  585929 system_pods.go:89] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:42.005187  585929 system_pods.go:126] duration metric: took 5.591652ms to wait for k8s-apps to be running ...
	I1205 20:36:42.005201  585929 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:36:42.005267  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:42.021323  585929 system_svc.go:56] duration metric: took 16.10852ms WaitForService to wait for kubelet
	I1205 20:36:42.021358  585929 kubeadm.go:582] duration metric: took 4m22.584731606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:36:42.021424  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:36:42.024632  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:42.024658  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:42.024682  585929 node_conditions.go:105] duration metric: took 3.248548ms to run NodePressure ...
	I1205 20:36:42.024698  585929 start.go:241] waiting for startup goroutines ...
	I1205 20:36:42.024709  585929 start.go:246] waiting for cluster config update ...
	I1205 20:36:42.024742  585929 start.go:255] writing updated cluster config ...
	I1205 20:36:42.025047  585929 ssh_runner.go:195] Run: rm -f paused
	I1205 20:36:42.077303  585929 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:36:42.079398  585929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-942599" cluster and "default" namespace by default
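After the "Done!" line the kubeconfig already points at the new profile, so the follow-on checks in this report can be reproduced directly with kubectl. A quick sanity check, assuming the context carries the profile name shown in the log line above:

  kubectl config current-context      # should print default-k8s-diff-port-942599
  kubectl get pods -n kube-system     # the same pod set listed by system_pods.go above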
	I1205 20:36:53.411276  585025 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297141231s)
	I1205 20:36:53.411423  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:53.432474  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:36:53.443908  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:36:53.454789  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:36:53.454821  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:36:53.454873  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:36:53.465648  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:36:53.465719  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:36:53.476492  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:36:53.486436  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:36:53.486505  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:36:53.499146  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.510237  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:36:53.510324  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.521186  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:36:53.531797  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:36:53.531890  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
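The block above is the stale-config sweep: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the endpoint is missing (here the files simply do not exist yet). Condensed into a single conditional per file, using the endpoint shown in the log:

  if ! sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf; then
    sudo rm -f /etc/kubernetes/admin.conf   # stale or absent, let kubeadm regenerate it
  fi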
	I1205 20:36:53.543056  585025 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:36:53.735019  585025 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:01.531096  585025 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:37:01.531179  585025 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:37:01.531278  585025 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:37:01.531407  585025 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:37:01.531546  585025 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:37:01.531635  585025 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:37:01.533284  585025 out.go:235]   - Generating certificates and keys ...
	I1205 20:37:01.533400  585025 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:37:01.533484  585025 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:37:01.533589  585025 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:37:01.533676  585025 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:37:01.533741  585025 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:37:01.533820  585025 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:37:01.533901  585025 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:37:01.533954  585025 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:37:01.534023  585025 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:37:01.534097  585025 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:37:01.534137  585025 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:37:01.534193  585025 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:37:01.534264  585025 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:37:01.534347  585025 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:37:01.534414  585025 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:37:01.534479  585025 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:37:01.534529  585025 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:37:01.534600  585025 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:37:01.534656  585025 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:37:01.536208  585025 out.go:235]   - Booting up control plane ...
	I1205 20:37:01.536326  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:37:01.536394  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:37:01.536487  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:37:01.536653  585025 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:37:01.536772  585025 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:37:01.536814  585025 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:37:01.536987  585025 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:37:01.537144  585025 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:37:01.537240  585025 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.640403ms
	I1205 20:37:01.537352  585025 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:37:01.537438  585025 kubeadm.go:310] [api-check] The API server is healthy after 5.002069704s
	I1205 20:37:01.537566  585025 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:37:01.537705  585025 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:37:01.537766  585025 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:37:01.537959  585025 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-816185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:37:01.538037  585025 kubeadm.go:310] [bootstrap-token] Using token: l8cx4j.koqnwrdaqrc08irs
	I1205 20:37:01.539683  585025 out.go:235]   - Configuring RBAC rules ...
	I1205 20:37:01.539813  585025 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:37:01.539945  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:37:01.540157  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:37:01.540346  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:37:01.540482  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:37:01.540602  585025 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:37:01.540746  585025 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:37:01.540818  585025 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:37:01.540905  585025 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:37:01.540922  585025 kubeadm.go:310] 
	I1205 20:37:01.541012  585025 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:37:01.541027  585025 kubeadm.go:310] 
	I1205 20:37:01.541149  585025 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:37:01.541160  585025 kubeadm.go:310] 
	I1205 20:37:01.541197  585025 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:37:01.541253  585025 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:37:01.541297  585025 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:37:01.541303  585025 kubeadm.go:310] 
	I1205 20:37:01.541365  585025 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:37:01.541371  585025 kubeadm.go:310] 
	I1205 20:37:01.541417  585025 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:37:01.541427  585025 kubeadm.go:310] 
	I1205 20:37:01.541486  585025 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:37:01.541593  585025 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:37:01.541689  585025 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:37:01.541707  585025 kubeadm.go:310] 
	I1205 20:37:01.541811  585025 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:37:01.541917  585025 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:37:01.541928  585025 kubeadm.go:310] 
	I1205 20:37:01.542020  585025 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542138  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:37:01.542171  585025 kubeadm.go:310] 	--control-plane 
	I1205 20:37:01.542180  585025 kubeadm.go:310] 
	I1205 20:37:01.542264  585025 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:37:01.542283  585025 kubeadm.go:310] 
	I1205 20:37:01.542407  585025 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542513  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
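The --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA with the standard kubeadm recipe; note that this cluster keeps its certificates in /var/lib/minikube/certs (see the certificateDir line earlier in this output) rather than the usual /etc/kubernetes/pki:

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'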
	I1205 20:37:01.542530  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:37:01.542538  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:37:01.543967  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:37:01.545652  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:37:01.557890  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
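The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. The log does not echo its contents, so the snippet below is only an illustrative sketch of a minimal bridge + host-local conflist (the subnet is an assumption), not the exact file minikube writes:

  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF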
	I1205 20:37:01.577447  585025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:37:01.577532  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-816185 minikube.k8s.io/updated_at=2024_12_05T20_37_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=no-preload-816185 minikube.k8s.io/primary=true
	I1205 20:37:01.577542  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:01.618121  585025 ops.go:34] apiserver oom_adj: -16
	I1205 20:37:01.806825  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.307212  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.807893  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.307202  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.806891  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.307571  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.807485  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.307695  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.387751  585025 kubeadm.go:1113] duration metric: took 3.810307917s to wait for elevateKubeSystemPrivileges
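The repeated "kubectl get sa default" runs above are a poll, roughly every 500ms going by the timestamps, that returns once the default service account exists; elevateKubeSystemPrivileges is reported done right after. A condensed sketch of that wait, with the same binary and kubeconfig paths as the log:

  until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5   # matches the retry cadence visible in the timestamps
  done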
	I1205 20:37:05.387790  585025 kubeadm.go:394] duration metric: took 5m0.269375789s to StartCluster
	I1205 20:37:05.387810  585025 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.387891  585025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:37:05.389703  585025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.389984  585025 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:37:05.390056  585025 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:37:05.390179  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:37:05.390193  585025 addons.go:69] Setting storage-provisioner=true in profile "no-preload-816185"
	I1205 20:37:05.390216  585025 addons.go:69] Setting default-storageclass=true in profile "no-preload-816185"
	I1205 20:37:05.390246  585025 addons.go:69] Setting metrics-server=true in profile "no-preload-816185"
	I1205 20:37:05.390281  585025 addons.go:234] Setting addon metrics-server=true in "no-preload-816185"
	W1205 20:37:05.390295  585025 addons.go:243] addon metrics-server should already be in state true
	I1205 20:37:05.390340  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390255  585025 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-816185"
	I1205 20:37:05.390263  585025 addons.go:234] Setting addon storage-provisioner=true in "no-preload-816185"
	W1205 20:37:05.390463  585025 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:37:05.390533  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390844  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390888  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.390852  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390947  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390973  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391032  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391810  585025 out.go:177] * Verifying Kubernetes components...
	I1205 20:37:05.393274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:37:05.408078  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40259
	I1205 20:37:05.408366  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1205 20:37:05.408765  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.408780  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.409315  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409337  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409441  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409465  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409767  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409800  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.410249  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I1205 20:37:05.410487  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.410537  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.410753  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.411387  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.411412  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.411847  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.412515  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.412565  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.413770  585025 addons.go:234] Setting addon default-storageclass=true in "no-preload-816185"
	W1205 20:37:05.413796  585025 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:37:05.413828  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.414184  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.414231  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.430214  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1205 20:37:05.430684  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.431260  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.431286  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.431697  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.431929  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.432941  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1205 20:37:05.433361  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.433835  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.433855  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.433933  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.434385  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.434596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.434638  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I1205 20:37:05.435193  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.435667  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.435694  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.435994  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.436000  585025 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:37:05.436635  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.436657  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.436683  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.437421  585025 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.437441  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:37:05.437461  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.438221  585025 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:37:05.439704  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:37:05.439721  585025 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:37:05.439737  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.440522  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441031  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.441058  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441198  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.441352  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.441458  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.441582  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.445842  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446223  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.446248  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446449  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.446661  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.446806  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.446923  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.472870  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I1205 20:37:05.473520  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.474053  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.474080  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.474456  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.474666  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.476603  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.476836  585025 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.476859  585025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:37:05.476886  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.480063  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480546  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.480580  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.481175  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.481331  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.481425  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.607284  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:37:05.627090  585025 node_ready.go:35] waiting up to 6m0s for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637577  585025 node_ready.go:49] node "no-preload-816185" has status "Ready":"True"
	I1205 20:37:05.637602  585025 node_ready.go:38] duration metric: took 10.476209ms for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637611  585025 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:05.642969  585025 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:05.696662  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.725276  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:37:05.725309  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:37:05.779102  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:37:05.779137  585025 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:37:05.814495  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.814531  585025 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:37:05.823828  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.863152  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.948854  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.948895  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949242  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949266  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949275  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.949294  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.949302  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949590  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949601  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949612  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.975655  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.975683  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.975962  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.975978  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004027  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.180164032s)
	I1205 20:37:07.004103  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004117  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004498  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004520  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004535  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004545  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004802  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004820  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208032  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.344819218s)
	I1205 20:37:07.208143  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208159  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208537  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208556  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208566  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208573  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208846  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208860  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208871  585025 addons.go:475] Verifying addon metrics-server=true in "no-preload-816185"
	I1205 20:37:07.210487  585025 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:37:07.212093  585025 addons.go:510] duration metric: took 1.822047986s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
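The addon enable above reduces to kubectl apply calls against manifests staged under /etc/kubernetes/addons/ on the node, run exactly as shown in the log. For reference, the storage-provisioner apply looks like this; metrics-server is applied the same way with its four manifests, and storageclass.yaml covers default-storageclass:

  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.31.2/kubectl apply \
    -f /etc/kubernetes/addons/storage-provisioner.yaml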
	I1205 20:37:07.658678  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:08.156061  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:08.156094  585025 pod_ready.go:82] duration metric: took 2.513098547s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:08.156109  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:10.162704  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:12.163550  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.163578  585025 pod_ready.go:82] duration metric: took 4.007461295s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.163601  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169123  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.169155  585025 pod_ready.go:82] duration metric: took 5.544964ms for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169170  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.175288  585025 pod_ready.go:103] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:14.676107  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:14.676137  585025 pod_ready.go:82] duration metric: took 2.506959209s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.676146  585025 pod_ready.go:39] duration metric: took 9.038525731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
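The pod_ready polling above checks the Ready condition of each control-plane pod in turn. A hedged one-liner equivalent using kubectl's built-in wait, relying on the component labels kubeadm places on its static pods:

  kubectl -n kube-system wait pod \
    -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' \
    --for=condition=Ready --timeout=6m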
	I1205 20:37:14.676165  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:37:14.676222  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:37:14.692508  585025 api_server.go:72] duration metric: took 9.302489277s to wait for apiserver process to appear ...
	I1205 20:37:14.692540  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:37:14.692562  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:37:14.697176  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:37:14.698320  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:37:14.698345  585025 api_server.go:131] duration metric: took 5.796971ms to wait for apiserver health ...
	I1205 20:37:14.698357  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:37:14.706456  585025 system_pods.go:59] 9 kube-system pods found
	I1205 20:37:14.706503  585025 system_pods.go:61] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.706512  585025 system_pods.go:61] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.706518  585025 system_pods.go:61] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.706524  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.706529  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.706534  585025 system_pods.go:61] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.706539  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.706549  585025 system_pods.go:61] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.706555  585025 system_pods.go:61] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.706565  585025 system_pods.go:74] duration metric: took 8.200516ms to wait for pod list to return data ...
	I1205 20:37:14.706577  585025 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:37:14.716217  585025 default_sa.go:45] found service account: "default"
	I1205 20:37:14.716259  585025 default_sa.go:55] duration metric: took 9.664045ms for default service account to be created ...
	I1205 20:37:14.716293  585025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:37:14.723293  585025 system_pods.go:86] 9 kube-system pods found
	I1205 20:37:14.723323  585025 system_pods.go:89] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.723329  585025 system_pods.go:89] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.723333  585025 system_pods.go:89] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.723337  585025 system_pods.go:89] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.723342  585025 system_pods.go:89] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.723346  585025 system_pods.go:89] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.723349  585025 system_pods.go:89] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.723355  585025 system_pods.go:89] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.723360  585025 system_pods.go:89] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.723368  585025 system_pods.go:126] duration metric: took 7.067824ms to wait for k8s-apps to be running ...
	I1205 20:37:14.723375  585025 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:37:14.723422  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:14.744142  585025 system_svc.go:56] duration metric: took 20.751867ms WaitForService to wait for kubelet
	I1205 20:37:14.744179  585025 kubeadm.go:582] duration metric: took 9.354165706s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:37:14.744200  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:37:14.751985  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:14.752026  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:14.752043  585025 node_conditions.go:105] duration metric: took 7.836665ms to run NodePressure ...
	I1205 20:37:14.752069  585025 start.go:241] waiting for startup goroutines ...
	I1205 20:37:14.752081  585025 start.go:246] waiting for cluster config update ...
	I1205 20:37:14.752095  585025 start.go:255] writing updated cluster config ...
	I1205 20:37:14.752490  585025 ssh_runner.go:195] Run: rm -f paused
	I1205 20:37:14.806583  585025 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:37:14.808574  585025 out.go:177] * Done! kubectl is now configured to use "no-preload-816185" cluster and "default" namespace by default
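Note on the successful path above: before declaring the no-preload cluster ready, minikube verifies, in order, the apiserver process, the apiserver healthz endpoint, the kube-system pods, the default service account, the kubelet service, and node conditions. A minimal shell sketch of repeating those checks by hand, assuming the apiserver address 192.168.61.37:8443 from this log, a kubeconfig context named after the profile (no-preload-816185), and that anonymous access to /healthz is still allowed (the upstream default); the sudo commands are meant to run inside the node, e.g. via 'minikube ssh -p no-preload-816185':

    # apiserver process and health endpoint (as in api_server.go above)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    curl -k https://192.168.61.37:8443/healthz          # expect: ok
    # system pods and the default service account (system_pods.go / default_sa.go)
    kubectl --context no-preload-816185 -n kube-system get pods
    kubectl --context no-preload-816185 -n default get serviceaccount default
    # kubelet service (system_svc.go)
    sudo systemctl is-active kubelet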
	I1205 20:37:17.029681  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:37:17.029940  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:37:17.029963  585602 kubeadm.go:310] 
	I1205 20:37:17.030022  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:37:17.030101  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:37:17.030128  585602 kubeadm.go:310] 
	I1205 20:37:17.030167  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:37:17.030209  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:37:17.030353  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:37:17.030369  585602 kubeadm.go:310] 
	I1205 20:37:17.030489  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:37:17.030540  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:37:17.030584  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:37:17.030594  585602 kubeadm.go:310] 
	I1205 20:37:17.030733  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:37:17.030843  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:37:17.030855  585602 kubeadm.go:310] 
	I1205 20:37:17.031025  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:37:17.031154  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:37:17.031268  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:37:17.031374  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:37:17.031386  585602 kubeadm.go:310] 
	I1205 20:37:17.032368  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:17.032493  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:37:17.032562  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 20:37:17.032709  585602 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
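The failure above is kubeadm's wait-control-plane phase timing out: the kubelet's local health endpoint on port 10248 keeps refusing connections, so the kubelet never came up and no static pods were started. A short troubleshooting sketch, limited to the commands kubeadm itself suggests in this output (run on the affected node):

    sudo systemctl status kubelet                      # is the unit running / enabled?
    sudo journalctl -xeu kubelet                       # why it exited (e.g. cgroup or config errors)
    curl -sSL http://localhost:10248/healthz           # the probe kubeadm's kubelet-check polls
    # any control-plane containers CRI-O did start, and their logs
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID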
	
	I1205 20:37:17.032762  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:37:17.518572  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:17.533868  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:37:17.547199  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:37:17.547224  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:37:17.547272  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:37:17.556733  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:37:17.556801  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:37:17.566622  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:37:17.577044  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:37:17.577121  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:37:17.588726  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.599269  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:37:17.599346  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.609243  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:37:17.618947  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:37:17.619034  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
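Between the failed attempt and the retry below, minikube resets kubeadm state and then, for each kubeconfig under /etc/kubernetes, keeps the file only if it already points at https://control-plane.minikube.internal:8443; here every grep exits with status 2 because the files were removed by the reset, so the rm calls are no-ops. The same cleanup, sketched as a loop (assuming kubeadm is on PATH; minikube actually invokes the versioned binary under /var/lib/minikube/binaries/v1.20.0):

    sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # drop any config that does not target the expected control-plane endpoint
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done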
	I1205 20:37:17.629228  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:37:17.878785  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:39:13.972213  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:39:13.972379  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 20:39:13.973936  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:39:13.974035  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:39:13.974150  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:39:13.974251  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:39:13.974341  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:39:13.974404  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:39:13.976164  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:39:13.976248  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:39:13.976339  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:39:13.976449  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:39:13.976538  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:39:13.976642  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:39:13.976736  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:39:13.976832  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:39:13.976924  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:39:13.977025  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:39:13.977131  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:39:13.977189  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:39:13.977272  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:39:13.977389  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:39:13.977474  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:39:13.977566  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:39:13.977650  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:39:13.977776  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:39:13.977901  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:39:13.977976  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:39:13.978137  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:39:13.979473  585602 out.go:235]   - Booting up control plane ...
	I1205 20:39:13.979581  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:39:13.979664  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:39:13.979732  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:39:13.979803  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:39:13.979952  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:39:13.980017  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:39:13.980107  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980396  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980511  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980744  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980843  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981116  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981227  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981439  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981528  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981718  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981731  585602 kubeadm.go:310] 
	I1205 20:39:13.981773  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:39:13.981831  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:39:13.981839  585602 kubeadm.go:310] 
	I1205 20:39:13.981888  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:39:13.981941  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:39:13.982052  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:39:13.982059  585602 kubeadm.go:310] 
	I1205 20:39:13.982144  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:39:13.982174  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:39:13.982208  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:39:13.982215  585602 kubeadm.go:310] 
	I1205 20:39:13.982302  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:39:13.982415  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:39:13.982431  585602 kubeadm.go:310] 
	I1205 20:39:13.982540  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:39:13.982618  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:39:13.982701  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:39:13.982766  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:39:13.982839  585602 kubeadm.go:310] 
	I1205 20:39:13.982855  585602 kubeadm.go:394] duration metric: took 7m58.414377536s to StartCluster
	I1205 20:39:13.982907  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:39:13.982975  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:39:14.031730  585602 cri.go:89] found id: ""
	I1205 20:39:14.031767  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.031779  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:39:14.031791  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:39:14.031865  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:39:14.068372  585602 cri.go:89] found id: ""
	I1205 20:39:14.068420  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.068433  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:39:14.068440  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:39:14.068512  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:39:14.106807  585602 cri.go:89] found id: ""
	I1205 20:39:14.106837  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.106847  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:39:14.106856  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:39:14.106930  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:39:14.144926  585602 cri.go:89] found id: ""
	I1205 20:39:14.144952  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.144960  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:39:14.144974  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:39:14.145052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:39:14.182712  585602 cri.go:89] found id: ""
	I1205 20:39:14.182742  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.182754  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:39:14.182762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:39:14.182826  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:39:14.220469  585602 cri.go:89] found id: ""
	I1205 20:39:14.220505  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.220519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:39:14.220527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:39:14.220593  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:39:14.269791  585602 cri.go:89] found id: ""
	I1205 20:39:14.269823  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.269835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:39:14.269842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:39:14.269911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:39:14.313406  585602 cri.go:89] found id: ""
	I1205 20:39:14.313439  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.313450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:39:14.313464  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:39:14.313483  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:39:14.330488  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:39:14.330526  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:39:14.417358  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:39:14.417403  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:39:14.417421  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:39:14.530226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:39:14.530270  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:39:14.585471  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:39:14.585512  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
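Having given up on StartCluster, minikube now collects diagnostics: it queries CRI-O for containers by component name (all empty above), then gathers dmesg, describe nodes (which fails because the apiserver is down), the CRI-O journal, container status, and the kubelet journal. Roughly the same data can be pulled by hand on the node, using only commands already shown in this log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name=$name           # empty output = that component never started
    done
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400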
	W1205 20:39:14.636389  585602 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 20:39:14.636456  585602 out.go:270] * 
	W1205 20:39:14.636535  585602 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.636549  585602 out.go:270] * 
	W1205 20:39:14.637475  585602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:39:14.640654  585602 out.go:201] 
	W1205 20:39:14.641873  585602 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.641931  585602 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 20:39:14.641975  585602 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 20:39:14.643389  585602 out.go:201] 
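The exit suggestion above points at the usual cause for this symptom, a kubelet/CRI-O cgroup-driver mismatch (minikube issue 4172). A hedged sketch of checking and applying it; the profile name is a placeholder because it is not shown in this excerpt, and the crio.conf location is the common default rather than anything confirmed by this log:

    # on the node: the two drivers should match (typically both "systemd")
    sudo grep -r cgroup_manager /etc/crio/
    sudo grep cgroupDriver /var/lib/kubelet/config.yaml
    # retry with the kubelet driver pinned, as the log suggests
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd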
	
	
	==> CRI-O <==
	Dec 05 20:45:44 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:45:44.200050453Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431544200026348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7bbd20b-a548-415b-8fce-64b3299ffda6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:45:44 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:45:44.200736661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fd50609-0f25-4910-85e5-54401d7adac8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:44 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:45:44.200789431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fd50609-0f25-4910-85e5-54401d7adac8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:44 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:45:44.201015450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430768054513041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97709a18c36bfbbe17081d53a3fbdd5f4224e74eab9eebb89f38d8165bd1e9f,PodSandboxId:e5faf7274a4aada39bfb245947da4bdd772bd370531c3b8927378948371d55d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430748151041816,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2fbf81a-7842-4591-9538-b64348a8ae02,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f,PodSandboxId:882812447bb3fa1d6f4d1c36bd08e1ea0095036f747002e94a355879fc625a14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430744816765742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5drgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733430737247261935,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43,PodSandboxId:69d443d593a980dd4197e947a91a4ac3c9464456f57e01cde405ad56c6d8b63e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430737186440332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5vdcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2e18fd-6980-45c9-87a4-f6d1ed31bf7b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430736908861133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f94d808f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff,PodSandboxId:1d0a1cb74162ffba59947b4c7683a9a397708a3563c9b3294b8288bb1b6b4924,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430728943226856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b7c3d53e623508b4ceb58ab97a9c81,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430717812378517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84e95c9ee06ddf16a72f81b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d,PodSandboxId:24da09f1d450b7e911e645bc450250d5fc0aca44a3d319480c9cb9c2bf687079,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430696374080434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304018f49d227e222ca00088ccc8b45b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733430696373899130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f94d808f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430696339287264,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84e95c9ee06ddf16a72f81b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fd50609-0f25-4910-85e5-54401d7adac8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:44 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:45:44.247206300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=112abeeb-36b8-468d-a0b0-c7b97ac36580 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:45:44 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:45:44.247284229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=112abeeb-36b8-468d-a0b0-c7b97ac36580 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:45:44 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:45:44.248394728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bc8f1a9-7d70-4cd7-843d-28bc8ca309e0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:45:44 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:45:44.248974942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431544248947765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bc8f1a9-7d70-4cd7-843d-28bc8ca309e0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:45:44 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:45:44.249842199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8e02667-254b-4f09-ae34-636c5905ed51 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:44 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:45:44.249897401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8e02667-254b-4f09-ae34-636c5905ed51 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:45:44 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:45:44.250126885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430768054513041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97709a18c36bfbbe17081d53a3fbdd5f4224e74eab9eebb89f38d8165bd1e9f,PodSandboxId:e5faf7274a4aada39bfb245947da4bdd772bd370531c3b8927378948371d55d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430748151041816,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2fbf81a-7842-4591-9538-b64348a8ae02,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f,PodSandboxId:882812447bb3fa1d6f4d1c36bd08e1ea0095036f747002e94a355879fc625a14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430744816765742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5drgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733430737247261935,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43,PodSandboxId:69d443d593a980dd4197e947a91a4ac3c9464456f57e01cde405ad56c6d8b63e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430737186440332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5vdcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2e18fd-6980-45c9-87a4
-f6d1ed31bf7b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430736908861133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f94d808f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff,PodSandboxId:1d0a1cb74162ffba59947b4c7683a9a397708a3563c9b3294b8288bb1b6b4924,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430728943226856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b7c3d53e623508b4ceb58ab9
7a9c81,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430717812378517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84e95c9ee06ddf16a72f8
1b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d,PodSandboxId:24da09f1d450b7e911e645bc450250d5fc0aca44a3d319480c9cb9c2bf687079,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430696374080434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304018f49d227e222ca00088ccc8b4
5b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733430696373899130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f94d80
8f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430696339287264,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84
e95c9ee06ddf16a72f81b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8e02667-254b-4f09-ae34-636c5905ed51 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6ee28be86cb2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   49a79f66de45c       storage-provisioner
	d97709a18c36b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   e5faf7274a4aa       busybox
	dd7068872d39b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   882812447bb3f       coredns-7c65d6cfc9-5drgc
	dc7dc19930243       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   49a79f66de45c       storage-provisioner
	444227d730d01       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   69d443d593a98       kube-proxy-5vdcq
	18e899b1e640c       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   2                   dee4184f6080c       kube-controller-manager-default-k8s-diff-port-942599
	62b61ec6f08d5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   1d0a1cb74162f       etcd-default-k8s-diff-port-942599
	83b7cd17782f8       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            2                   de2ea815e00fa       kube-apiserver-default-k8s-diff-port-942599
	40accb73a4e91       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      14 minutes ago      Running             kube-scheduler            1                   24da09f1d450b       kube-scheduler-default-k8s-diff-port-942599
	587008b58cfaa       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      14 minutes ago      Exited              kube-controller-manager   1                   dee4184f6080c       kube-controller-manager-default-k8s-diff-port-942599
	e2d9e7ffdd041       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      14 minutes ago      Exited              kube-apiserver            1                   de2ea815e00fa       kube-apiserver-default-k8s-diff-port-942599
	
	
	==> coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52362 - 25366 "HINFO IN 4187734828424423246.5763596893688110444. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018753362s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-942599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-942599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=default-k8s-diff-port-942599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_24_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:24:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-942599
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:45:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:42:49 +0000   Thu, 05 Dec 2024 20:24:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:42:49 +0000   Thu, 05 Dec 2024 20:24:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:42:49 +0000   Thu, 05 Dec 2024 20:24:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:42:49 +0000   Thu, 05 Dec 2024 20:32:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.96
	  Hostname:    default-k8s-diff-port-942599
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b52175b9ca4472aab8c7300eafed722
	  System UUID:                6b52175b-9ca4-472a-ab8c-7300eafed722
	  Boot ID:                    02064eb6-f339-407a-83b1-8bd5c5670f78
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-5drgc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-942599                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-942599             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-942599    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-5vdcq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-942599             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-rq8xm                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-942599 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-942599 event: Registered Node default-k8s-diff-port-942599 in Controller
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-942599 event: Registered Node default-k8s-diff-port-942599 in Controller
	
	
	==> dmesg <==
	[Dec 5 20:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055811] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046796] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.253644] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.881602] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.638676] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.292482] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.061579] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060911] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.214640] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.134249] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.330208] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.485003] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +0.060438] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.014343] systemd-fstab-generator[922]: Ignoring "noauto" option for root device
	[ +14.710623] kauditd_printk_skb: 87 callbacks suppressed
	[Dec 5 20:32] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +3.258284] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.431491] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] <==
	{"level":"info","ts":"2024-12-05T20:32:09.093350Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:32:09.097826Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T20:32:09.098264Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"46ee31ebc3aa8fe","initial-advertise-peer-urls":["https://192.168.50.96:2380"],"listen-peer-urls":["https://192.168.50.96:2380"],"advertise-client-urls":["https://192.168.50.96:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.96:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T20:32:09.098360Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.96:2380"}
	{"level":"info","ts":"2024-12-05T20:32:09.098421Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.96:2380"}
	{"level":"info","ts":"2024-12-05T20:32:09.099388Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T20:32:10.379441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-05T20:32:10.379497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-05T20:32:10.379513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe received MsgPreVoteResp from 46ee31ebc3aa8fe at term 2"}
	{"level":"info","ts":"2024-12-05T20:32:10.379524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe became candidate at term 3"}
	{"level":"info","ts":"2024-12-05T20:32:10.379530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe received MsgVoteResp from 46ee31ebc3aa8fe at term 3"}
	{"level":"info","ts":"2024-12-05T20:32:10.379554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe became leader at term 3"}
	{"level":"info","ts":"2024-12-05T20:32:10.379561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 46ee31ebc3aa8fe elected leader 46ee31ebc3aa8fe at term 3"}
	{"level":"info","ts":"2024-12-05T20:32:10.381989Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"46ee31ebc3aa8fe","local-member-attributes":"{Name:default-k8s-diff-port-942599 ClientURLs:[https://192.168.50.96:2379]}","request-path":"/0/members/46ee31ebc3aa8fe/attributes","cluster-id":"fa78aab20fdf43c2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:32:10.382011Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:32:10.382130Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:32:10.382513Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:32:10.382543Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:32:10.383231Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:32:10.383364Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:32:10.384210Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.96:2379"}
	{"level":"info","ts":"2024-12-05T20:32:10.384318Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:42:14.564874Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":859}
	{"level":"info","ts":"2024-12-05T20:42:14.575119Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":859,"took":"9.558885ms","hash":4236800835,"current-db-size-bytes":2707456,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2707456,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-12-05T20:42:14.575235Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4236800835,"revision":859,"compact-revision":-1}
	
	
	==> kernel <==
	 20:45:44 up 14 min,  0 users,  load average: 0.17, 0.23, 0.15
	Linux default-k8s-diff-port-942599 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] <==
	W1205 20:42:17.109074       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:42:17.109134       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:42:17.110281       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:42:17.110392       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:43:17.111443       1 handler_proxy.go:99] no RequestInfo found in the context
	W1205 20:43:17.111517       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:43:17.111834       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1205 20:43:17.112060       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:43:17.113225       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:43:17.113298       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:45:17.114399       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:45:17.114790       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1205 20:45:17.114887       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:45:17.114984       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:45:17.115967       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:45:17.116050       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] <==
	I1205 20:31:36.801930       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1205 20:31:37.580134       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:37.580296       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1205 20:31:37.580379       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1205 20:31:37.584309       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 20:31:37.587959       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1205 20:31:37.588044       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1205 20:31:37.588283       1 instance.go:232] Using reconciler: lease
	W1205 20:31:37.589512       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:38.580972       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:38.580994       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:38.590030       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:39.877297       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:40.027038       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:40.130312       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:42.059538       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:42.857959       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:43.077850       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:46.474142       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:46.477812       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:46.504215       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:51.854440       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:52.797914       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:53.778314       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1205 20:31:57.589920       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] <==
	E1205 20:40:20.210502       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:40:20.704030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:40:50.217502       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:40:50.713610       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:41:20.224269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:41:20.723187       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:41:50.232430       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:41:50.732224       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:42:20.239534       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:42:20.742998       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:42:49.681748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-942599"
	E1205 20:42:50.249612       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:42:50.753646       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:43:20.256802       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:43:20.762365       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:43:37.732368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="370.332µs"
	E1205 20:43:50.263366       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:43:50.770343       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:43:51.727554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="120.981µs"
	E1205 20:44:20.271261       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:44:20.780532       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:44:50.280410       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:44:50.789866       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:45:20.287173       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:45:20.798099       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] <==
	I1205 20:31:37.332191       1 serving.go:386] Generated self-signed cert in-memory
	I1205 20:31:37.781459       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1205 20:31:37.781551       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:31:37.783214       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1205 20:31:37.783420       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 20:31:37.783455       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1205 20:31:37.783466       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1205 20:32:16.001632       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:32:17.655633       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:32:17.672264       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.96"]
	E1205 20:32:17.672358       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:32:17.736488       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:32:17.736543       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:32:17.736592       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:32:17.741584       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:32:17.742158       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:32:17.742193       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:32:17.744961       1 config.go:199] "Starting service config controller"
	I1205 20:32:17.745053       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:32:17.745135       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:32:17.745157       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:32:17.746428       1 config.go:328] "Starting node config controller"
	I1205 20:32:17.746458       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:32:17.881352       1 shared_informer.go:320] Caches are synced for node config
	I1205 20:32:17.892549       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:32:17.900782       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] <==
	W1205 20:32:16.012392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 20:32:16.012938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.013153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:32:16.013197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.013270       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:32:16.013287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.014838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:32:16.014897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.014973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:32:16.015005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015110       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:32:16.015145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:32:16.015318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:32:16.015415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:32:16.015770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:32:16.015887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:32:16.015956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.022121       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:32:16.031952       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1205 20:32:17.703822       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:44:35 default-k8s-diff-port-942599 kubelet[929]: E1205 20:44:35.936870     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431475936069078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:35 default-k8s-diff-port-942599 kubelet[929]: E1205 20:44:35.936914     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431475936069078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:41 default-k8s-diff-port-942599 kubelet[929]: E1205 20:44:41.712479     929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rq8xm" podUID="99b577fd-fbfd-4178-8b06-ef96f118c30b"
	Dec 05 20:44:45 default-k8s-diff-port-942599 kubelet[929]: E1205 20:44:45.938594     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431485938161841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:45 default-k8s-diff-port-942599 kubelet[929]: E1205 20:44:45.938652     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431485938161841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:55 default-k8s-diff-port-942599 kubelet[929]: E1205 20:44:55.714924     929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rq8xm" podUID="99b577fd-fbfd-4178-8b06-ef96f118c30b"
	Dec 05 20:44:55 default-k8s-diff-port-942599 kubelet[929]: E1205 20:44:55.940344     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431495939861671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:44:55 default-k8s-diff-port-942599 kubelet[929]: E1205 20:44:55.940427     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431495939861671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:05 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:05.941653     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431505941305640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:05 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:05.941998     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431505941305640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:08 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:08.711764     929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rq8xm" podUID="99b577fd-fbfd-4178-8b06-ef96f118c30b"
	Dec 05 20:45:15 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:15.943198     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431515942909341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:15 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:15.943220     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431515942909341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:20 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:20.711489     929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rq8xm" podUID="99b577fd-fbfd-4178-8b06-ef96f118c30b"
	Dec 05 20:45:25 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:25.945615     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431525945042045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:25 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:25.945637     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431525945042045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:33 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:33.712248     929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rq8xm" podUID="99b577fd-fbfd-4178-8b06-ef96f118c30b"
	Dec 05 20:45:35 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:35.750076     929 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 20:45:35 default-k8s-diff-port-942599 kubelet[929]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:45:35 default-k8s-diff-port-942599 kubelet[929]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:45:35 default-k8s-diff-port-942599 kubelet[929]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:45:35 default-k8s-diff-port-942599 kubelet[929]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:45:35 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:35.947275     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431535947003834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:35 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:35.947321     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431535947003834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:44 default-k8s-diff-port-942599 kubelet[929]: E1205 20:45:44.712852     929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rq8xm" podUID="99b577fd-fbfd-4178-8b06-ef96f118c30b"
	
	
	==> storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] <==
	I1205 20:32:17.568449       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 20:32:47.574458       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] <==
	I1205 20:32:48.152193       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:32:48.165620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:32:48.165819       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:33:05.570406       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:33:05.571267       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-942599_356d0fb9-7c51-4de0-b490-dd4f2f392b16!
	I1205 20:33:05.574264       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1be7b79-7151-4907-8d26-e24030f7bb58", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-942599_356d0fb9-7c51-4de0-b490-dd4f2f392b16 became leader
	I1205 20:33:05.672523       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-942599_356d0fb9-7c51-4de0-b490-dd4f2f392b16!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-942599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rq8xm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-942599 describe pod metrics-server-6867b74b74-rq8xm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-942599 describe pod metrics-server-6867b74b74-rq8xm: exit status 1 (67.538611ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rq8xm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-942599 describe pod metrics-server-6867b74b74-rq8xm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1205 20:38:15.012771  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:54.456748  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-816185 -n no-preload-816185
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-05 20:46:15.361225382 +0000 UTC m=+6267.566845708
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-816185 -n no-preload-816185
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-816185 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-816185 logs -n 25: (2.198472967s)
E1205 20:46:18.083973  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-790679 -- sudo                         | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-790679                                 | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-886958                           | kubernetes-upgrade-886958    | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-816185             | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-789000            | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-242147 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | disable-driver-mounts-242147                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:25 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386085        | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-942599  | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-816185                  | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-789000                 | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:37 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386085             | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-942599       | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:36 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:28:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:28:03.038037  585929 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:28:03.038168  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038178  585929 out.go:358] Setting ErrFile to fd 2...
	I1205 20:28:03.038185  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038375  585929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:28:03.038955  585929 out.go:352] Setting JSON to false
	I1205 20:28:03.039948  585929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":11429,"bootTime":1733419054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:28:03.040015  585929 start.go:139] virtualization: kvm guest
	I1205 20:28:03.042326  585929 out.go:177] * [default-k8s-diff-port-942599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:28:03.044291  585929 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:28:03.044320  585929 notify.go:220] Checking for updates...
	I1205 20:28:03.047072  585929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:28:03.048480  585929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:28:03.049796  585929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:28:03.051035  585929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:28:03.052263  585929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:28:03.054167  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:28:03.054665  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.054749  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.070361  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33501
	I1205 20:28:03.070891  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.071534  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.071563  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.071995  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.072285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.072587  585929 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:28:03.072920  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.072968  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.088186  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I1205 20:28:03.088660  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.089202  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.089224  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.089542  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.089782  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.122562  585929 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:28:03.123970  585929 start.go:297] selected driver: kvm2
	I1205 20:28:03.123992  585929 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.124128  585929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:28:03.125014  585929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.125111  585929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:28:03.140461  585929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:28:03.140904  585929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:28:03.140943  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:28:03.141015  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:28:03.141067  585929 start.go:340] cluster config:
	{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.141179  585929 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.144215  585929 out.go:177] * Starting "default-k8s-diff-port-942599" primary control-plane node in "default-k8s-diff-port-942599" cluster
	I1205 20:28:03.276565  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:03.145620  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:28:03.145661  585929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:28:03.145676  585929 cache.go:56] Caching tarball of preloaded images
	I1205 20:28:03.145844  585929 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:28:03.145864  585929 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:28:03.146005  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:28:03.146240  585929 start.go:360] acquireMachinesLock for default-k8s-diff-port-942599: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:28:06.348547  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:12.428620  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:15.500614  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:21.580587  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:24.652618  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:30.732598  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:33.804612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:39.884624  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:42.956577  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:49.036617  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:52.108607  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:58.188605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:01.260573  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:07.340591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:10.412578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:16.492574  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:19.564578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:25.644591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:28.716619  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:34.796609  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:37.868605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:43.948594  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:47.020553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:53.100499  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:56.172560  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:02.252612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:05.324648  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:11.404563  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:14.476553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:20.556568  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:23.561620  585113 start.go:364] duration metric: took 4m32.790399884s to acquireMachinesLock for "embed-certs-789000"
	I1205 20:30:23.561696  585113 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:23.561711  585113 fix.go:54] fixHost starting: 
	I1205 20:30:23.562327  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:23.562400  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:23.578260  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I1205 20:30:23.578843  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:23.579379  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:30:23.579405  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:23.579776  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:23.580051  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:23.580222  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:30:23.582161  585113 fix.go:112] recreateIfNeeded on embed-certs-789000: state=Stopped err=<nil>
	I1205 20:30:23.582190  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	W1205 20:30:23.582386  585113 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:23.584585  585113 out.go:177] * Restarting existing kvm2 VM for "embed-certs-789000" ...
	I1205 20:30:23.586583  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Start
	I1205 20:30:23.586835  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring networks are active...
	I1205 20:30:23.587628  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network default is active
	I1205 20:30:23.587937  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network mk-embed-certs-789000 is active
	I1205 20:30:23.588228  585113 main.go:141] libmachine: (embed-certs-789000) Getting domain xml...
	I1205 20:30:23.588898  585113 main.go:141] libmachine: (embed-certs-789000) Creating domain...
	I1205 20:30:24.829936  585113 main.go:141] libmachine: (embed-certs-789000) Waiting to get IP...
	I1205 20:30:24.830897  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:24.831398  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:24.831465  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:24.831364  586433 retry.go:31] will retry after 208.795355ms: waiting for machine to come up
	I1205 20:30:25.042078  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.042657  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.042689  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.042599  586433 retry.go:31] will retry after 385.313968ms: waiting for machine to come up
	I1205 20:30:25.429439  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.429877  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.429913  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.429811  586433 retry.go:31] will retry after 432.591358ms: waiting for machine to come up
	I1205 20:30:23.558453  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:30:23.558508  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.558905  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:30:23.558943  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.559166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:30:23.561471  585025 machine.go:96] duration metric: took 4m37.380964872s to provisionDockerMachine
	I1205 20:30:23.561518  585025 fix.go:56] duration metric: took 4m37.403172024s for fixHost
	I1205 20:30:23.561524  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 4m37.40319095s
	W1205 20:30:23.561546  585025 start.go:714] error starting host: provision: host is not running
	W1205 20:30:23.561677  585025 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:30:23.561688  585025 start.go:729] Will try again in 5 seconds ...
	I1205 20:30:25.864656  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.865217  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.865255  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.865138  586433 retry.go:31] will retry after 571.148349ms: waiting for machine to come up
	I1205 20:30:26.437644  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:26.438220  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:26.438250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:26.438165  586433 retry.go:31] will retry after 585.234455ms: waiting for machine to come up
	I1205 20:30:27.025107  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.025510  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.025538  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.025459  586433 retry.go:31] will retry after 648.291531ms: waiting for machine to come up
	I1205 20:30:27.675457  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.675898  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.675928  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.675838  586433 retry.go:31] will retry after 804.071148ms: waiting for machine to come up
	I1205 20:30:28.481966  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:28.482386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:28.482416  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:28.482329  586433 retry.go:31] will retry after 905.207403ms: waiting for machine to come up
	I1205 20:30:29.388933  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:29.389546  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:29.389571  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:29.389484  586433 retry.go:31] will retry after 1.48894232s: waiting for machine to come up
	I1205 20:30:28.562678  585025 start.go:360] acquireMachinesLock for no-preload-816185: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:30:30.880218  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:30.880742  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:30.880773  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:30.880685  586433 retry.go:31] will retry after 2.314200549s: waiting for machine to come up
	I1205 20:30:33.198477  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:33.198998  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:33.199029  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:33.198945  586433 retry.go:31] will retry after 1.922541264s: waiting for machine to come up
	I1205 20:30:35.123922  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:35.124579  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:35.124607  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:35.124524  586433 retry.go:31] will retry after 3.537087912s: waiting for machine to come up
	I1205 20:30:38.662839  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:38.663212  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:38.663250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:38.663160  586433 retry.go:31] will retry after 3.371938424s: waiting for machine to come up
	I1205 20:30:43.457332  585602 start.go:364] duration metric: took 3m31.488905557s to acquireMachinesLock for "old-k8s-version-386085"
	I1205 20:30:43.457418  585602 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:43.457427  585602 fix.go:54] fixHost starting: 
	I1205 20:30:43.457835  585602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:43.457891  585602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:43.474845  585602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I1205 20:30:43.475386  585602 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:43.475993  585602 main.go:141] libmachine: Using API Version  1
	I1205 20:30:43.476026  585602 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:43.476404  585602 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:43.476613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:30:43.476778  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetState
	I1205 20:30:43.478300  585602 fix.go:112] recreateIfNeeded on old-k8s-version-386085: state=Stopped err=<nil>
	I1205 20:30:43.478329  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	W1205 20:30:43.478502  585602 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:43.480644  585602 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386085" ...
	I1205 20:30:42.038738  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039204  585113 main.go:141] libmachine: (embed-certs-789000) Found IP for machine: 192.168.39.200
	I1205 20:30:42.039235  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has current primary IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039244  585113 main.go:141] libmachine: (embed-certs-789000) Reserving static IP address...
	I1205 20:30:42.039760  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.039806  585113 main.go:141] libmachine: (embed-certs-789000) DBG | skip adding static IP to network mk-embed-certs-789000 - found existing host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"}
	I1205 20:30:42.039819  585113 main.go:141] libmachine: (embed-certs-789000) Reserved static IP address: 192.168.39.200
	I1205 20:30:42.039835  585113 main.go:141] libmachine: (embed-certs-789000) Waiting for SSH to be available...
	I1205 20:30:42.039843  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Getting to WaitForSSH function...
	I1205 20:30:42.042013  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042352  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.042386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042542  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH client type: external
	I1205 20:30:42.042562  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa (-rw-------)
	I1205 20:30:42.042586  585113 main.go:141] libmachine: (embed-certs-789000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:30:42.042595  585113 main.go:141] libmachine: (embed-certs-789000) DBG | About to run SSH command:
	I1205 20:30:42.042603  585113 main.go:141] libmachine: (embed-certs-789000) DBG | exit 0
	I1205 20:30:42.168573  585113 main.go:141] libmachine: (embed-certs-789000) DBG | SSH cmd err, output: <nil>: 
	I1205 20:30:42.168960  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetConfigRaw
	I1205 20:30:42.169783  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.172396  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.172790  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.172818  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.173023  585113 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/config.json ...
	I1205 20:30:42.173214  585113 machine.go:93] provisionDockerMachine start ...
	I1205 20:30:42.173234  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:42.173465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.175399  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175754  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.175785  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175885  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.176063  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176208  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176412  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.176583  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.176816  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.176830  585113 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:30:42.280829  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:30:42.280861  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281135  585113 buildroot.go:166] provisioning hostname "embed-certs-789000"
	I1205 20:30:42.281168  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.284355  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284692  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.284723  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284817  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.285019  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285185  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285338  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.285511  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.285716  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.285730  585113 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-789000 && echo "embed-certs-789000" | sudo tee /etc/hostname
	I1205 20:30:42.409310  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-789000
	
	I1205 20:30:42.409370  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.412182  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412524  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.412566  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412779  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.412989  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413137  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413278  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.413468  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.413674  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.413690  585113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-789000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-789000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-789000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:30:42.529773  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:30:42.529806  585113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:30:42.529829  585113 buildroot.go:174] setting up certificates
	I1205 20:30:42.529841  585113 provision.go:84] configureAuth start
	I1205 20:30:42.529850  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.530201  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.533115  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533527  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.533558  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533753  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.535921  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536310  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.536339  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536518  585113 provision.go:143] copyHostCerts
	I1205 20:30:42.536610  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:30:42.536631  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:30:42.536698  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:30:42.536793  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:30:42.536802  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:30:42.536826  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:30:42.536880  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:30:42.536887  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:30:42.536908  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:30:42.536956  585113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-789000 san=[127.0.0.1 192.168.39.200 embed-certs-789000 localhost minikube]
	I1205 20:30:42.832543  585113 provision.go:177] copyRemoteCerts
	I1205 20:30:42.832610  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:30:42.832640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.835403  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835669  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.835701  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835848  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.836027  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.836161  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.836314  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:42.918661  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:30:42.943903  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 20:30:42.968233  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:30:42.993174  585113 provision.go:87] duration metric: took 463.317149ms to configureAuth
	I1205 20:30:42.993249  585113 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:30:42.993449  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:30:42.993554  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.996211  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996637  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.996696  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996841  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.997049  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997196  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997305  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.997458  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.997641  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.997656  585113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:30:43.220096  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:30:43.220127  585113 machine.go:96] duration metric: took 1.046899757s to provisionDockerMachine
	I1205 20:30:43.220141  585113 start.go:293] postStartSetup for "embed-certs-789000" (driver="kvm2")
	I1205 20:30:43.220152  585113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:30:43.220176  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.220544  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:30:43.220584  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.223481  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.223860  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.223889  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.224102  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.224316  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.224483  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.224667  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.307878  585113 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:30:43.312875  585113 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:30:43.312905  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:30:43.312981  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:30:43.313058  585113 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:30:43.313169  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:30:43.323221  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:43.347978  585113 start.go:296] duration metric: took 127.819083ms for postStartSetup
	I1205 20:30:43.348023  585113 fix.go:56] duration metric: took 19.786318897s for fixHost
	I1205 20:30:43.348046  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.350639  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351004  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.351026  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351247  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.351478  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351642  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351803  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.351950  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:43.352122  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:43.352133  585113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:30:43.457130  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430643.415370749
	
	I1205 20:30:43.457164  585113 fix.go:216] guest clock: 1733430643.415370749
	I1205 20:30:43.457176  585113 fix.go:229] Guest: 2024-12-05 20:30:43.415370749 +0000 UTC Remote: 2024-12-05 20:30:43.34802793 +0000 UTC m=+292.733798952 (delta=67.342819ms)
	I1205 20:30:43.457209  585113 fix.go:200] guest clock delta is within tolerance: 67.342819ms
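	The three lines above are minikube's guest-clock check: "date +%s.%N" is read over SSH and compared with the host clock, and the VM clock is only reset when the skew exceeds a tolerance. Below is a minimal Go sketch of that comparison; the 67.342819ms delta is the value from the log, while the 2-second threshold is an illustrative assumption, not minikube's actual constant.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock skew is small enough
// to skip resetting the VM clock. The tolerance value here is illustrative.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(67342819 * time.Nanosecond) // 67.342819ms skew, as reported in the log
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}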
	I1205 20:30:43.457217  585113 start.go:83] releasing machines lock for "embed-certs-789000", held for 19.895543311s
	I1205 20:30:43.457251  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.457563  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:43.460628  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461002  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.461042  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461175  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461758  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461937  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.462067  585113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:30:43.462120  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.462147  585113 ssh_runner.go:195] Run: cat /version.json
	I1205 20:30:43.462169  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.464859  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465147  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465237  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465264  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465472  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465497  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465589  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465711  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465768  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.465863  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465907  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.466006  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.466129  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.568909  585113 ssh_runner.go:195] Run: systemctl --version
	I1205 20:30:43.575175  585113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:30:43.725214  585113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:30:43.732226  585113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:30:43.732369  585113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:30:43.750186  585113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:30:43.750223  585113 start.go:495] detecting cgroup driver to use...
	I1205 20:30:43.750296  585113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:30:43.767876  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:30:43.783386  585113 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:30:43.783465  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:30:43.799917  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:30:43.815607  585113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:30:43.935150  585113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:30:44.094292  585113 docker.go:233] disabling docker service ...
	I1205 20:30:44.094378  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:30:44.111307  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:30:44.127528  585113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:30:44.284496  585113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:30:44.422961  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:30:44.439104  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:30:44.461721  585113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:30:44.461787  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.476398  585113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:30:44.476463  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.489821  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.502250  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.514245  585113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:30:44.528227  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.540205  585113 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.559447  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
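	The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls). As a rough illustration only, the following Go snippet applies the same kind of substitution for two of those keys to an in-memory copy of the file; the sample config content is assumed, not taken from the VM.

package main

import (
	"fmt"
	"regexp"
)

// Applies the equivalent of the pause_image and cgroup_manager sed edits
// shown in the log, against an assumed sample of 02-crio.conf.
func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"

	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}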
	I1205 20:30:44.571434  585113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:30:44.583635  585113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:30:44.583717  585113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:30:44.600954  585113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:30:44.613381  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:44.733592  585113 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:30:44.843948  585113 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:30:44.844036  585113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:30:44.849215  585113 start.go:563] Will wait 60s for crictl version
	I1205 20:30:44.849275  585113 ssh_runner.go:195] Run: which crictl
	I1205 20:30:44.853481  585113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:30:44.900488  585113 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:30:44.900583  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.944771  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.977119  585113 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:30:44.978527  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:44.981609  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982001  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:44.982037  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982240  585113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:30:44.986979  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:30:45.001779  585113 kubeadm.go:883] updating cluster {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:30:45.001935  585113 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:30:45.002021  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:45.041827  585113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:30:45.041918  585113 ssh_runner.go:195] Run: which lz4
	I1205 20:30:45.046336  585113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:30:45.050804  585113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:30:45.050852  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:30:43.482307  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .Start
	I1205 20:30:43.482501  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring networks are active...
	I1205 20:30:43.483222  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network default is active
	I1205 20:30:43.483574  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network mk-old-k8s-version-386085 is active
	I1205 20:30:43.484156  585602 main.go:141] libmachine: (old-k8s-version-386085) Getting domain xml...
	I1205 20:30:43.485045  585602 main.go:141] libmachine: (old-k8s-version-386085) Creating domain...
	I1205 20:30:44.770817  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting to get IP...
	I1205 20:30:44.772079  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:44.772538  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:44.772599  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:44.772517  586577 retry.go:31] will retry after 247.056435ms: waiting for machine to come up
	I1205 20:30:45.021096  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.021642  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.021560  586577 retry.go:31] will retry after 241.543543ms: waiting for machine to come up
	I1205 20:30:45.265136  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.265654  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.265683  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.265596  586577 retry.go:31] will retry after 324.624293ms: waiting for machine to come up
	I1205 20:30:45.592067  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.592603  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.592636  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.592558  586577 retry.go:31] will retry after 408.275958ms: waiting for machine to come up
	I1205 20:30:46.002321  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.002872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.002904  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.002808  586577 retry.go:31] will retry after 693.356488ms: waiting for machine to come up
	I1205 20:30:46.697505  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.697874  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.697900  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.697846  586577 retry.go:31] will retry after 906.807324ms: waiting for machine to come up
	I1205 20:30:46.612504  585113 crio.go:462] duration metric: took 1.56620974s to copy over tarball
	I1205 20:30:46.612585  585113 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:30:48.868826  585113 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256202653s)
	I1205 20:30:48.868863  585113 crio.go:469] duration metric: took 2.256329112s to extract the tarball
	I1205 20:30:48.868873  585113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:30:48.906872  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:48.955442  585113 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:30:48.955468  585113 cache_images.go:84] Images are preloaded, skipping loading
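	The two "crictl images --output json" probes bracketing the preload step show how minikube decides whether to copy and extract the tarball: before extraction the expected kube-apiserver image is missing, afterwards all images are reported as preloaded. Below is a rough Go sketch of that presence check; the JSON field names ("images", "repoTags") are assumed to match what this crictl version emits.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList mirrors the assumed shape of `crictl images --output json`:
// {"images":[{"repoTags":[...]}, ...]}
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the container runtime already knows about wantTag.
func hasImage(wantTag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, wantTag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
	fmt.Println(ok, err) // false before the preload tarball is extracted, true afterwards
}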
	I1205 20:30:48.955477  585113 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.31.2 crio true true} ...
	I1205 20:30:48.955603  585113 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-789000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:30:48.955668  585113 ssh_runner.go:195] Run: crio config
	I1205 20:30:49.007389  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:49.007419  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:49.007433  585113 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:30:49.007473  585113 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-789000 NodeName:embed-certs-789000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:30:49.007656  585113 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-789000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:30:49.007734  585113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:30:49.021862  585113 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:30:49.021949  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:30:49.032937  585113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1205 20:30:49.053311  585113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:30:49.073636  585113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1205 20:30:49.094437  585113 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I1205 20:30:49.098470  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:30:49.112013  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:49.246312  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:30:49.264250  585113 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000 for IP: 192.168.39.200
	I1205 20:30:49.264301  585113 certs.go:194] generating shared ca certs ...
	I1205 20:30:49.264329  585113 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:30:49.264565  585113 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:30:49.264627  585113 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:30:49.264641  585113 certs.go:256] generating profile certs ...
	I1205 20:30:49.264775  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/client.key
	I1205 20:30:49.264854  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key.5c723d79
	I1205 20:30:49.264894  585113 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key
	I1205 20:30:49.265026  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:30:49.265094  585113 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:30:49.265109  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:30:49.265144  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:30:49.265179  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:30:49.265215  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:30:49.265258  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:49.266137  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:30:49.297886  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:30:49.339461  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:30:49.385855  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:30:49.427676  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 20:30:49.466359  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:30:49.492535  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:30:49.518311  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:30:49.543545  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:30:49.567956  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:30:49.592361  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:30:49.616245  585113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:30:49.633947  585113 ssh_runner.go:195] Run: openssl version
	I1205 20:30:49.640353  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:30:49.652467  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657353  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657440  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.664045  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:30:49.679941  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:30:49.695153  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700397  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700458  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.706786  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:30:49.718994  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:30:49.731470  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736654  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736725  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.743034  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:30:49.755334  585113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:30:49.760378  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:30:49.766942  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:30:49.773911  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:30:49.780556  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:30:49.787004  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:30:49.793473  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
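	Each of the openssl "-checkend 86400" runs above asks whether a certificate expires within the next 24 hours; a stale one would trigger regeneration. The same check expressed in Go with crypto/x509 (the path is just one of the files checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the Go equivalent of `openssl x509 -noout -in path -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}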
	I1205 20:30:49.800009  585113 kubeadm.go:392] StartCluster: {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:30:49.800118  585113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:30:49.800163  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.844520  585113 cri.go:89] found id: ""
	I1205 20:30:49.844620  585113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:30:49.857604  585113 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:30:49.857640  585113 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:30:49.857702  585113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:30:49.870235  585113 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:30:49.871318  585113 kubeconfig.go:125] found "embed-certs-789000" server: "https://192.168.39.200:8443"
	I1205 20:30:49.873416  585113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:30:49.884281  585113 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
	I1205 20:30:49.884331  585113 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:30:49.884348  585113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:30:49.884410  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.930238  585113 cri.go:89] found id: ""
	I1205 20:30:49.930351  585113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:30:49.947762  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:30:49.957878  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:30:49.957902  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:30:49.957960  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:30:49.967261  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:30:49.967342  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:30:49.977868  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:30:49.987715  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:30:49.987777  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:30:49.998157  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.008224  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:30:50.008334  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.018748  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:30:50.028204  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:30:50.028287  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
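	The grep/rm pairs above apply a simple rule: any kubeconfig under /etc/kubernetes that is missing or does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed, so the kubeadm init phases run next can regenerate it. A compact Go sketch of that rule (not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale drops a kubeconfig that is missing or does not reference the
// expected control-plane endpoint, so `kubeadm init phase kubeconfig` recreates it.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // already points at the right endpoint, keep it
	}
	return os.Remove(path) // stale or unreadable: let kubeadm regenerate it
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil && !os.IsNotExist(err) {
			fmt.Println("remove:", err)
		}
	}
}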
	I1205 20:30:50.038459  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:30:50.049458  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:50.175199  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:47.606601  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:47.607065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:47.607098  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:47.607001  586577 retry.go:31] will retry after 1.007867893s: waiting for machine to come up
	I1205 20:30:48.617140  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:48.617641  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:48.617674  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:48.617608  586577 retry.go:31] will retry after 1.15317606s: waiting for machine to come up
	I1205 20:30:49.773126  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:49.773670  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:49.773699  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:49.773620  586577 retry.go:31] will retry after 1.342422822s: waiting for machine to come up
	I1205 20:30:51.117592  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:51.118034  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:51.118065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:51.117973  586577 retry.go:31] will retry after 1.575794078s: waiting for machine to come up
	I1205 20:30:51.203131  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.027881984s)
	I1205 20:30:51.203193  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.415679  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.500984  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.598883  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:30:51.598986  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.099206  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.599755  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.619189  585113 api_server.go:72] duration metric: took 1.020303049s to wait for apiserver process to appear ...
	I1205 20:30:52.619236  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:30:52.619268  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:52.619903  585113 api_server.go:269] stopped: https://192.168.39.200:8443/healthz: Get "https://192.168.39.200:8443/healthz": dial tcp 192.168.39.200:8443: connect: connection refused
	I1205 20:30:53.119501  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.342363  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:30:55.342398  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:30:55.342418  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.471683  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.471729  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:55.619946  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.634855  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.634906  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.119928  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.128358  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:56.128396  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.620047  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.625869  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:30:56.633658  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:30:56.633698  585113 api_server.go:131] duration metric: took 4.014451973s to wait for apiserver health ...
	I1205 20:30:56.633712  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:56.633721  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:56.635658  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
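	Note: the apiserver wait logged above is a plain poll of /healthz until it returns 200, which is why the log alternates 500 responses (with the poststarthook breakdown) and then a single "ok". The following is a minimal standalone sketch of that kind of polling loop, not minikube's actual api_server.go code; it skips TLS verification for brevity, whereas the real check trusts the cluster's CA and client certificates.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it returns 200 or the
// timeout expires. The 500 responses seen in the log carry the
// poststarthook breakdown in their body.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: certificate verification is skipped in this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // assumed cadence, roughly matching the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.200:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```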
	I1205 20:30:52.695389  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:52.695838  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:52.695868  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:52.695784  586577 retry.go:31] will retry after 2.377931285s: waiting for machine to come up
	I1205 20:30:55.076859  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:55.077428  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:55.077469  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:55.077377  586577 retry.go:31] will retry after 2.586837249s: waiting for machine to come up
	I1205 20:30:56.637276  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:30:56.649131  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:30:56.670981  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:30:56.682424  585113 system_pods.go:59] 8 kube-system pods found
	I1205 20:30:56.682497  585113 system_pods.go:61] "coredns-7c65d6cfc9-hrrjc" [43d8b550-f29d-4a84-a2fc-b456abc486c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:30:56.682508  585113 system_pods.go:61] "etcd-embed-certs-789000" [99f232e4-1bc8-4f98-8bcf-8aa61d66158b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:30:56.682519  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [d1d11749-0ddc-4172-aaa9-bca00c64c912] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:30:56.682528  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [b291c993-cd10-4d0f-8c3e-a6db726cf83a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:30:56.682536  585113 system_pods.go:61] "kube-proxy-h79dj" [80abe907-24e7-4001-90a6-f4d10fd9fc6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:30:56.682544  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [490d7afa-24fd-43c8-8088-539bb7e1eb9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:30:56.682556  585113 system_pods.go:61] "metrics-server-6867b74b74-tlsjl" [cd1d73a4-27d1-4e68-b7d8-6da497fc4e53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:30:56.682570  585113 system_pods.go:61] "storage-provisioner" [3246e383-4f15-4222-a50c-c5b243fda12a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:30:56.682579  585113 system_pods.go:74] duration metric: took 11.566899ms to wait for pod list to return data ...
	I1205 20:30:56.682598  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:30:56.687073  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:30:56.687172  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:30:56.687222  585113 node_conditions.go:105] duration metric: took 4.613225ms to run NodePressure ...
	I1205 20:30:56.687273  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:56.981686  585113 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985944  585113 kubeadm.go:739] kubelet initialised
	I1205 20:30:56.985968  585113 kubeadm.go:740] duration metric: took 4.256434ms waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985976  585113 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:30:56.991854  585113 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:30:58.997499  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
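	Note: the pod_ready.go lines above come from a loop that repeatedly re-reads each system pod and inspects its Ready condition. A rough client-go equivalent is sketched below; the pod name and namespace are taken from the log, while the kubeconfig source and the 2-second poll interval are assumptions, not the harness's actual wiring.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: load the default kubeconfig; the test harness builds its
	// client from the profile's embedded credentials instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-hrrjc", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```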
	I1205 20:30:57.667200  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:57.667644  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:57.667681  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:57.667592  586577 retry.go:31] will retry after 2.856276116s: waiting for machine to come up
	I1205 20:31:00.525334  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:00.525796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:31:00.525830  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:31:00.525740  586577 retry.go:31] will retry after 5.119761936s: waiting for machine to come up
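	Note: the retry.go lines above show the machine-IP wait retrying with a growing, jittered delay. The log does not show the actual backoff policy, so the sketch below is only an approximation of that pattern; the multiplier, jitter, and attempt count are assumptions.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries fn with an exponentially growing, jittered delay,
// mirroring the "will retry after ..." lines in the log.
func waitForIP(fn func() (string, error), attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := fn(); err == nil {
			return ip, nil
		}
		// Assumption: 1.5x growth with up to 50% jitter; minikube's retry
		// helper may use different parameters.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	_, err := waitForIP(func() (string, error) {
		return "", errors.New("unable to find current IP address")
	}, 5)
	fmt.Println(err)
}
```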
	I1205 20:31:00.999102  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:01.500344  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:01.500371  585113 pod_ready.go:82] duration metric: took 4.508490852s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:01.500382  585113 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:03.506621  585113 pod_ready.go:103] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:05.007677  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:05.007703  585113 pod_ready.go:82] duration metric: took 3.507315826s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:05.007713  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:05.646790  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647230  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has current primary IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647264  585602 main.go:141] libmachine: (old-k8s-version-386085) Found IP for machine: 192.168.72.144
	I1205 20:31:05.647278  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserving static IP address...
	I1205 20:31:05.647796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.647834  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | skip adding static IP to network mk-old-k8s-version-386085 - found existing host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"}
	I1205 20:31:05.647856  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserved static IP address: 192.168.72.144
	I1205 20:31:05.647872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Getting to WaitForSSH function...
	I1205 20:31:05.647889  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting for SSH to be available...
	I1205 20:31:05.650296  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650610  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.650643  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH client type: external
	I1205 20:31:05.650779  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa (-rw-------)
	I1205 20:31:05.650816  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:05.650837  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | About to run SSH command:
	I1205 20:31:05.650851  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | exit 0
	I1205 20:31:05.776876  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | SSH cmd err, output: <nil>: 
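	Note: the WaitForSSH step above shells out to the system ssh binary with the external-client options shown in the log and runs "exit 0" until the VM answers. The sketch below reproduces that probe with a subset of those options; the key path and address are taken from the log, while the retry cadence and attempt count are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" over ssh with options similar to the external SSH
// client invocation in the log, returning nil once the VM accepts the session.
func sshReady(keyPath, userAtHost string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		userAtHost,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa"
	for i := 0; i < 30; i++ {
		if err := sshReady(key, "docker@192.168.72.144"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(5 * time.Second) // assumed cadence
	}
	fmt.Println("gave up waiting for SSH")
}
```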
	I1205 20:31:05.777311  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:31:05.777948  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:05.780609  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781053  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.781091  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781319  585602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json ...
	I1205 20:31:05.781585  585602 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:05.781607  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:05.781942  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.784729  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785155  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.785191  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785326  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.785491  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785659  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785886  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.786078  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.786309  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.786323  585602 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:05.893034  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:05.893079  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893388  585602 buildroot.go:166] provisioning hostname "old-k8s-version-386085"
	I1205 20:31:05.893426  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893623  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.896484  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.896883  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.896910  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.897031  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.897252  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897441  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897615  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.897796  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.897965  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.897977  585602 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386085 && echo "old-k8s-version-386085" | sudo tee /etc/hostname
	I1205 20:31:06.017910  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386085
	
	I1205 20:31:06.017939  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.020956  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021298  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.021332  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021494  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021863  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021995  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.022137  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.022325  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.022342  585602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386085' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386085/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386085' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:06.138200  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:06.138234  585602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:06.138261  585602 buildroot.go:174] setting up certificates
	I1205 20:31:06.138274  585602 provision.go:84] configureAuth start
	I1205 20:31:06.138287  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:06.138588  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.141488  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.141909  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.141965  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.142096  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.144144  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144720  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.144742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144951  585602 provision.go:143] copyHostCerts
	I1205 20:31:06.145020  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:06.145031  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:06.145085  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:06.145206  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:06.145219  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:06.145248  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:06.145335  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:06.145346  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:06.145376  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:06.145452  585602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386085 san=[127.0.0.1 192.168.72.144 localhost minikube old-k8s-version-386085]
	I1205 20:31:06.276466  585602 provision.go:177] copyRemoteCerts
	I1205 20:31:06.276530  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:06.276559  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.279218  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279550  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.279578  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279766  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.279990  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.280152  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.280317  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.362479  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:06.387631  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:31:06.413110  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:06.437931  585602 provision.go:87] duration metric: took 299.641033ms to configureAuth
	I1205 20:31:06.437962  585602 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:06.438176  585602 config.go:182] Loaded profile config "old-k8s-version-386085": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:31:06.438272  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.441059  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441413  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.441444  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441655  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.441846  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.441992  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.442174  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.442379  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.442552  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.442568  585602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:06.655666  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:06.655699  585602 machine.go:96] duration metric: took 874.099032ms to provisionDockerMachine
	I1205 20:31:06.655713  585602 start.go:293] postStartSetup for "old-k8s-version-386085" (driver="kvm2")
	I1205 20:31:06.655723  585602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:06.655752  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.656082  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:06.656115  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.658835  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659178  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.659229  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659378  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.659636  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.659808  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.659971  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.744484  585602 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:06.749025  585602 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:06.749060  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:06.749134  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:06.749273  585602 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:06.749411  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:06.760720  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:06.785449  585602 start.go:296] duration metric: took 129.720092ms for postStartSetup
	I1205 20:31:06.785500  585602 fix.go:56] duration metric: took 23.328073686s for fixHost
	I1205 20:31:06.785526  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.788417  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.788797  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.788828  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.789049  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.789296  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789483  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789688  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.789870  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.790046  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.790065  585602 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:06.897781  585929 start.go:364] duration metric: took 3m3.751494327s to acquireMachinesLock for "default-k8s-diff-port-942599"
	I1205 20:31:06.897847  585929 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:06.897858  585929 fix.go:54] fixHost starting: 
	I1205 20:31:06.898355  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:06.898419  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:06.916556  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I1205 20:31:06.917111  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:06.917648  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:31:06.917674  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:06.918014  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:06.918256  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:06.918402  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:31:06.920077  585929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-942599: state=Stopped err=<nil>
	I1205 20:31:06.920105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	W1205 20:31:06.920257  585929 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:06.922145  585929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-942599" ...
	I1205 20:31:06.923548  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Start
	I1205 20:31:06.923770  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring networks are active...
	I1205 20:31:06.924750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network default is active
	I1205 20:31:06.925240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network mk-default-k8s-diff-port-942599 is active
	I1205 20:31:06.925721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Getting domain xml...
	I1205 20:31:06.926719  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Creating domain...
	I1205 20:31:06.897579  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430666.872047181
	
	I1205 20:31:06.897606  585602 fix.go:216] guest clock: 1733430666.872047181
	I1205 20:31:06.897615  585602 fix.go:229] Guest: 2024-12-05 20:31:06.872047181 +0000 UTC Remote: 2024-12-05 20:31:06.785506394 +0000 UTC m=+234.970971247 (delta=86.540787ms)
	I1205 20:31:06.897679  585602 fix.go:200] guest clock delta is within tolerance: 86.540787ms
	I1205 20:31:06.897691  585602 start.go:83] releasing machines lock for "old-k8s-version-386085", held for 23.440303187s
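	Note: the fix.go lines above read the guest clock over SSH with "date +%s.%N", compare it with the host clock, and check that the delta is within a tolerance (86.540787ms here). The sketch below reproduces that comparison on the values from the log; the tolerance constant is an assumption, since the log only shows that this delta passed.

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the output of `date +%s.%N` (as run on the guest) and
// returns how far the guest clock is from the given local timestamp.
func clockDelta(dateOutput string, local time.Time) (time.Duration, error) {
	s := strings.TrimSpace(dateOutput)
	secs, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing %q: %w", s, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(local), nil
}

func main() {
	// Guest and host timestamps taken from the log lines above.
	delta, err := clockDelta("1733430666.872047181\n", time.Unix(1733430666, 785506394))
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance, not from the log
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```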
	I1205 20:31:06.897727  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.898085  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.901127  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901530  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.901567  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901719  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902413  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902626  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902776  585602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:06.902827  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.902878  585602 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:06.902903  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.905664  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.905912  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906050  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906086  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906256  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906341  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906367  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906411  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906517  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906684  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906837  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906849  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.907112  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.986078  585602 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:07.009500  585602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:07.159146  585602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:07.166263  585602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:07.166358  585602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:07.186021  585602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:07.186063  585602 start.go:495] detecting cgroup driver to use...
	I1205 20:31:07.186140  585602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:07.205074  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:07.221207  585602 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:07.221268  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:07.236669  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:07.252848  585602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:07.369389  585602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:07.504993  585602 docker.go:233] disabling docker service ...
	I1205 20:31:07.505101  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:07.523294  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:07.538595  585602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:07.687830  585602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:07.816176  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:07.833624  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:07.853409  585602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:31:07.853478  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.865346  585602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:07.865426  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.877962  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.889255  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.901632  585602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:07.916169  585602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:07.927092  585602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:07.927169  585602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:07.942288  585602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:31:07.953314  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:08.092156  585602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:08.205715  585602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:08.205799  585602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:08.214280  585602 start.go:563] Will wait 60s for crictl version
	I1205 20:31:08.214351  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:08.220837  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:08.265983  585602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:08.266065  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.295839  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.327805  585602 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 20:31:07.014634  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:08.018024  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.018062  585113 pod_ready.go:82] duration metric: took 3.010340127s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.018080  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024700  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.024731  585113 pod_ready.go:82] duration metric: took 6.639434ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024744  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030379  585113 pod_ready.go:93] pod "kube-proxy-h79dj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.030399  585113 pod_ready.go:82] duration metric: took 5.648086ms for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030408  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036191  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.036211  585113 pod_ready.go:82] duration metric: took 5.797344ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036223  585113 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:10.051737  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:08.329278  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:08.332352  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332700  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:08.332747  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332930  585602 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:08.337611  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:08.350860  585602 kubeadm.go:883] updating cluster {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:08.351016  585602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:31:08.351090  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:08.403640  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:08.403716  585602 ssh_runner.go:195] Run: which lz4
	I1205 20:31:08.408211  585602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:08.413136  585602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:08.413168  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 20:31:10.209351  585602 crio.go:462] duration metric: took 1.801169802s to copy over tarball
	I1205 20:31:10.209438  585602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:31:08.255781  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting to get IP...
	I1205 20:31:08.256721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257262  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.257164  586715 retry.go:31] will retry after 301.077952ms: waiting for machine to come up
	I1205 20:31:08.559682  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560187  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560216  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.560130  586715 retry.go:31] will retry after 364.457823ms: waiting for machine to come up
	I1205 20:31:08.926774  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927371  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927401  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.927274  586715 retry.go:31] will retry after 461.958198ms: waiting for machine to come up
	I1205 20:31:09.390861  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391502  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.391432  586715 retry.go:31] will retry after 587.049038ms: waiting for machine to come up
	I1205 20:31:09.980451  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.980999  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.981026  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.980932  586715 retry.go:31] will retry after 499.551949ms: waiting for machine to come up
	I1205 20:31:10.482653  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483188  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483219  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:10.483135  586715 retry.go:31] will retry after 749.476034ms: waiting for machine to come up
	I1205 20:31:11.233788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234286  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234315  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:11.234227  586715 retry.go:31] will retry after 768.81557ms: waiting for machine to come up
	I1205 20:31:12.004904  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005427  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005460  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:12.005382  586715 retry.go:31] will retry after 1.360132177s: waiting for machine to come up
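
The repeated "will retry after …: waiting for machine to come up" lines come from a retry loop that sleeps a growing, jittered delay between attempts while the VM acquires an IP. A minimal sketch of that pattern, with illustrative bounds rather than minikube's actual retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out,
    // sleeping a randomized, growing delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Grow the delay each round and add jitter so concurrent waiters spread out.
            delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        tries := 0
        err := retryWithBackoff(5, 300*time.Millisecond, func() error {
            tries++
            if tries < 4 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("done:", err)
    }
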
	I1205 20:31:12.549406  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:15.043540  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:13.303553  585602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094044744s)
	I1205 20:31:13.303598  585602 crio.go:469] duration metric: took 3.094215888s to extract the tarball
	I1205 20:31:13.303610  585602 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:13.350989  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:13.388660  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:13.388702  585602 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:13.388814  585602 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.388822  585602 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.388832  585602 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.388853  585602 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.388881  585602 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.388904  585602 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:31:13.388823  585602 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.388859  585602 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390414  585602 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390941  585602 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.391016  585602 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.390927  585602 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.391373  585602 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.391378  585602 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.565006  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.577450  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 20:31:13.584653  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.597086  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.619848  585602 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 20:31:13.619899  585602 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.619955  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.623277  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.628407  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.697151  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.703111  585602 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 20:31:13.703167  585602 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 20:31:13.703219  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736004  585602 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 20:31:13.736059  585602 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.736058  585602 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 20:31:13.736078  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.736094  585602 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.736104  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736135  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736187  585602 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 20:31:13.736207  585602 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.736235  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.783651  585602 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 20:31:13.783706  585602 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.783758  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.787597  585602 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 20:31:13.787649  585602 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.787656  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.787692  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.828445  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.828491  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.828544  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.828573  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.828616  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.828635  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.890937  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.992600  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.992661  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.992725  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.992780  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.095364  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:14.095462  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:14.163224  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 20:31:14.163320  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:14.163339  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:14.163420  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:14.163510  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.243805  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 20:31:14.243860  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 20:31:14.243881  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 20:31:14.287718  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 20:31:14.290994  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 20:31:14.291049  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 20:31:14.579648  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:14.728232  585602 cache_images.go:92] duration metric: took 1.339506459s to LoadCachedImages
	W1205 20:31:14.728389  585602 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
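
Each "Loading image from:" line pairs an image reference with an on-disk cache file; as the paths show, the registry/repository path is kept and the ':' before the tag becomes '_', and the warning above just means that cache file is absent on this host. A small sketch of the mapping as implied by the log (a simplification, not minikube's actual code):

    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    // cachePathFor turns an image reference into a cache file path the way the
    // log entries pair them: keep the repository path, replace ':' with '_'.
    func cachePathFor(cacheDir, arch, imageRef string) string {
        return filepath.Join(cacheDir, "images", arch, strings.ReplaceAll(imageRef, ":", "_"))
    }

    func main() {
        fmt.Println(cachePathFor("/home/jenkins/.minikube/cache", "amd64",
            "registry.k8s.io/kube-proxy:v1.20.0"))
        // /home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
    }
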
	I1205 20:31:14.728417  585602 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.20.0 crio true true} ...
	I1205 20:31:14.728570  585602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386085 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
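
The block above is the rendered kubelet systemd drop-in; the empty ExecStart= line clears the command inherited from the base unit before the new one is set. A minimal text/template sketch of rendering such a drop-in, with an illustrative struct and a trimmed-down flag set rather than minikube's own template:

    package main

    import (
        "os"
        "text/template"
    )

    // dropIn holds the per-node values substituted into the kubelet unit override.
    type dropIn struct {
        KubeletPath string
        NodeName    string
        NodeIP      string
    }

    // The empty ExecStart= resets any command inherited from the base unit,
    // which systemd requires before ExecStart can be redefined in a drop-in.
    const unitTmpl = "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}\n\n[Install]\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        _ = t.Execute(os.Stdout, dropIn{
            KubeletPath: "/var/lib/minikube/binaries/v1.20.0/kubelet",
            NodeName:    "old-k8s-version-386085",
            NodeIP:      "192.168.72.144",
        })
    }
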
	I1205 20:31:14.728672  585602 ssh_runner.go:195] Run: crio config
	I1205 20:31:14.778932  585602 cni.go:84] Creating CNI manager for ""
	I1205 20:31:14.778957  585602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:14.778967  585602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:14.778987  585602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386085 NodeName:old-k8s-version-386085 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:31:14.779131  585602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386085"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:14.779196  585602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 20:31:14.792400  585602 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:14.792494  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:14.802873  585602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:31:14.821562  585602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:14.839442  585602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 20:31:14.861314  585602 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:14.865457  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
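
The bash one-liner above makes the /etc/hosts edit idempotent: it filters out any existing control-plane.minikube.internal line, appends the current IP, and copies the result back over /etc/hosts through a temp file. A minimal Go sketch of the same idea against a local file (no sudo; the file name in main is a placeholder):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // upsertHostsEntry rewrites path so it contains exactly one line mapping ip to host.
    func upsertHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            fields := strings.Fields(line)
            // Drop any previous line ending in this hostname, mirroring the grep -v in the log.
            if len(fields) >= 2 && fields[len(fields)-1] == host {
                continue
            }
            if strings.TrimSpace(line) != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        // Replace in one step, like the cp over the temp file in the log.
        return os.Rename(tmp, path)
    }

    func main() {
        if err := upsertHostsEntry("hosts.test", "192.168.72.144", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
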
	I1205 20:31:14.878278  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:15.002193  585602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:15.030699  585602 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085 for IP: 192.168.72.144
	I1205 20:31:15.030734  585602 certs.go:194] generating shared ca certs ...
	I1205 20:31:15.030758  585602 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.030975  585602 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:15.031027  585602 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:15.031048  585602 certs.go:256] generating profile certs ...
	I1205 20:31:15.031206  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key
	I1205 20:31:15.031276  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18
	I1205 20:31:15.031324  585602 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key
	I1205 20:31:15.031489  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:15.031535  585602 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:15.031550  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:15.031581  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:15.031612  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:15.031644  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:15.031698  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:15.032410  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:15.063090  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:15.094212  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:15.124685  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:15.159953  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 20:31:15.204250  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:31:15.237483  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:15.276431  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:15.303774  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:15.328872  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:15.353852  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:15.380916  585602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:15.401082  585602 ssh_runner.go:195] Run: openssl version
	I1205 20:31:15.407442  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:15.420377  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425721  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425800  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.432475  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:15.446140  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:15.459709  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465165  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465241  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.471609  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:15.484139  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:15.496636  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501575  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501634  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.507814  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
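
The ln -fs commands above install each CA under /etc/ssl/certs twice: once by file name and once under OpenSSL's subject-hash name (for example b5213941.0), which is how tools that scan the trust directory look certificates up. A sketch of that second step in Go, shelling out to openssl for the hash (local placeholder paths; the real run operates on the VM as root):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash installs certPath into certsDir under OpenSSL's <subject-hash>.0
    // name, which is what the ln -fs commands in the log set up.
    func linkCertByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // -f behaviour: replace an existing link if present
        return os.Symlink(certPath, link)
    }

    func main() {
        // Placeholder paths; adjust to a directory you can write to.
        if err := linkCertByHash("minikubeCA.pem", "."); err != nil {
            log.Fatal(err)
        }
    }
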
	I1205 20:31:15.521234  585602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:15.526452  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:15.532999  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:15.540680  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:15.547455  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:15.553996  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:15.560574  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
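
The openssl -checkend 86400 runs above ask whether each certificate expires within the next 24 hours, which is what decides whether certs are regenerated on restart. An equivalent check in pure Go, parsing the PEM and comparing NotAfter (the file name in main is a placeholder):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in pemPath expires
    // before now+window, mirroring openssl's -checkend.
    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
            return false, errors.New("no certificate PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
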
	I1205 20:31:15.568489  585602 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:15.568602  585602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:15.568682  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.610693  585602 cri.go:89] found id: ""
	I1205 20:31:15.610808  585602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:15.622685  585602 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:15.622709  585602 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:15.622764  585602 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:15.633754  585602 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:15.634922  585602 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386085" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:31:15.635682  585602 kubeconfig.go:62] /home/jenkins/minikube-integration/20052-530897/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386085" cluster setting kubeconfig missing "old-k8s-version-386085" context setting]
	I1205 20:31:15.636878  585602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.719767  585602 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:15.731576  585602 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I1205 20:31:15.731622  585602 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:15.731639  585602 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:15.731705  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.777769  585602 cri.go:89] found id: ""
	I1205 20:31:15.777875  585602 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:15.797121  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:15.807961  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:15.807991  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:15.808042  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:31:15.818177  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:15.818270  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:15.829092  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:31:15.839471  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:15.839564  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:15.850035  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.859907  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:15.859984  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.870882  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:31:15.881475  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:15.881549  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:31:15.892078  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:15.904312  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.042308  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.787487  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:13.367666  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368154  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368185  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:13.368096  586715 retry.go:31] will retry after 1.319101375s: waiting for machine to come up
	I1205 20:31:14.689562  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690039  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:14.689996  586715 retry.go:31] will retry after 2.267379471s: waiting for machine to come up
	I1205 20:31:16.959412  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959882  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959915  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:16.959804  586715 retry.go:31] will retry after 2.871837018s: waiting for machine to come up
	I1205 20:31:17.044878  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:19.543265  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:17.036864  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.128855  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.219276  585602 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:17.219380  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:17.720206  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.219623  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.719555  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.219776  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.719967  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.219686  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.719806  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.219875  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.719915  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.834750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835299  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835326  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:19.835203  586715 retry.go:31] will retry after 2.740879193s: waiting for machine to come up
	I1205 20:31:22.577264  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577746  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577775  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:22.577709  586715 retry.go:31] will retry after 3.807887487s: waiting for machine to come up
	I1205 20:31:22.043635  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:24.543255  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:22.219930  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:22.719848  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.719903  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.220505  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.719726  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.220161  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.720115  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.220399  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.719567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
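
The half-second cadence of the pgrep lines above is the "waiting for apiserver process to appear" poll: run pgrep against the kube-apiserver command line until it matches or a deadline passes. A minimal sketch of that loop, with an assumed timeout and a simplified pgrep flag set:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until the pattern matches a running process's
    // full command line, or the timeout elapses.
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches.
            if err := exec.Command("pgrep", "-xf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("process matching %q did not appear within %v", pattern, timeout)
    }

    func main() {
        err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute)
        fmt.Println(err)
    }
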
	I1205 20:31:27.669618  585025 start.go:364] duration metric: took 59.106849765s to acquireMachinesLock for "no-preload-816185"
	I1205 20:31:27.669680  585025 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:27.669689  585025 fix.go:54] fixHost starting: 
	I1205 20:31:27.670111  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:27.670153  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:27.689600  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1205 20:31:27.690043  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:27.690508  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:31:27.690530  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:27.690931  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:27.691146  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:27.691279  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:31:27.692881  585025 fix.go:112] recreateIfNeeded on no-preload-816185: state=Stopped err=<nil>
	I1205 20:31:27.692905  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	W1205 20:31:27.693059  585025 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:27.694833  585025 out.go:177] * Restarting existing kvm2 VM for "no-preload-816185" ...
	I1205 20:31:26.389296  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389828  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Found IP for machine: 192.168.50.96
	I1205 20:31:26.389866  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has current primary IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserving static IP address...
	I1205 20:31:26.390321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserved static IP address: 192.168.50.96
	I1205 20:31:26.390354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for SSH to be available...
	I1205 20:31:26.390380  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.390404  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | skip adding static IP to network mk-default-k8s-diff-port-942599 - found existing host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"}
	I1205 20:31:26.390420  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Getting to WaitForSSH function...
	I1205 20:31:26.392509  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392875  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.392912  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH client type: external
	I1205 20:31:26.392988  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa (-rw-------)
	I1205 20:31:26.393057  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:26.393086  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | About to run SSH command:
	I1205 20:31:26.393105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | exit 0
	I1205 20:31:26.520867  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:26.521212  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetConfigRaw
	I1205 20:31:26.521857  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.524512  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.524853  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.524883  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.525141  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:31:26.525404  585929 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:26.525425  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:26.525639  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.527806  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.528121  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528257  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.528474  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528635  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528771  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.528902  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.529132  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.529147  585929 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:26.645385  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:26.645429  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645719  585929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-942599"
	I1205 20:31:26.645751  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645962  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.648906  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649316  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.649346  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649473  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.649686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649880  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649998  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.650161  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.650338  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.650354  585929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-942599 && echo "default-k8s-diff-port-942599" | sudo tee /etc/hostname
	I1205 20:31:26.780217  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942599
	
	I1205 20:31:26.780253  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.783240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783628  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.783660  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783804  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.783997  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784162  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.784530  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.784747  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.784766  585929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-942599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-942599/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-942599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:26.909975  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:26.910006  585929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:26.910087  585929 buildroot.go:174] setting up certificates
	I1205 20:31:26.910101  585929 provision.go:84] configureAuth start
	I1205 20:31:26.910114  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.910440  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.913667  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.914094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.917031  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917430  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.917462  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917608  585929 provision.go:143] copyHostCerts
	I1205 20:31:26.917681  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:26.917706  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:26.917772  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:26.917889  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:26.917900  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:26.917935  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:26.918013  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:26.918023  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:26.918065  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:26.918163  585929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-942599 san=[127.0.0.1 192.168.50.96 default-k8s-diff-port-942599 localhost minikube]
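The server certificate generated above carries the SAN list shown in the log entry (127.0.0.1, 192.168.50.96, the profile name, localhost, minikube). A minimal way to spot-check those SANs by hand, assuming the same server.pem path from this log, would be:

	# print the Subject Alternative Name extension of the generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'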
	I1205 20:31:27.003691  585929 provision.go:177] copyRemoteCerts
	I1205 20:31:27.003783  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:27.003821  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.006311  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006632  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.006665  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006820  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.007011  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.007153  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.007274  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.094973  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:27.121684  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 20:31:27.146420  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:27.171049  585929 provision.go:87] duration metric: took 260.930345ms to configureAuth
	I1205 20:31:27.171083  585929 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:27.171268  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:27.171385  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.174287  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174677  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.174717  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174946  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.175168  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175338  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.175703  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.175927  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.175959  585929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:27.416697  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:27.416724  585929 machine.go:96] duration metric: took 891.305367ms to provisionDockerMachine
	I1205 20:31:27.416737  585929 start.go:293] postStartSetup for "default-k8s-diff-port-942599" (driver="kvm2")
	I1205 20:31:27.416748  585929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:27.416786  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.417143  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:27.417183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.419694  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420041  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.420072  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420259  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.420488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.420681  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.420813  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.507592  585929 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:27.512178  585929 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:27.512209  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:27.512297  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:27.512416  585929 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:27.512544  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:27.522860  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:27.550167  585929 start.go:296] duration metric: took 133.414654ms for postStartSetup
	I1205 20:31:27.550211  585929 fix.go:56] duration metric: took 20.652352836s for fixHost
	I1205 20:31:27.550240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.553056  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.553490  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553631  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.553822  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554007  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.554372  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.554584  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.554603  585929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:27.669428  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430687.619179277
	
	I1205 20:31:27.669455  585929 fix.go:216] guest clock: 1733430687.619179277
	I1205 20:31:27.669467  585929 fix.go:229] Guest: 2024-12-05 20:31:27.619179277 +0000 UTC Remote: 2024-12-05 20:31:27.550217419 +0000 UTC m=+204.551998169 (delta=68.961858ms)
	I1205 20:31:27.669506  585929 fix.go:200] guest clock delta is within tolerance: 68.961858ms
	I1205 20:31:27.669514  585929 start.go:83] releasing machines lock for "default-k8s-diff-port-942599", held for 20.771694403s
	I1205 20:31:27.669559  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.669877  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:27.672547  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.672978  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.673009  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.673224  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673992  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.674125  585929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:27.674176  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.674201  585929 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:27.674231  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.677006  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677388  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677418  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677437  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677565  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.677745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.677919  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.677925  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677948  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.678115  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.678107  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.678258  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.678382  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.678527  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.790786  585929 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:27.797092  585929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:27.946053  585929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:27.953979  585929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:27.954073  585929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:27.975059  585929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:27.975090  585929 start.go:495] detecting cgroup driver to use...
	I1205 20:31:27.975160  585929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:27.991738  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:28.006412  585929 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:28.006529  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:28.021329  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:28.037390  585929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:28.155470  585929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:28.326332  585929 docker.go:233] disabling docker service ...
	I1205 20:31:28.326415  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:28.343299  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:28.358147  585929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:28.493547  585929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:28.631184  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:28.647267  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:28.670176  585929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:28.670269  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.686230  585929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:28.686312  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.702991  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.715390  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.731909  585929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:28.745042  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.757462  585929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.779049  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
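The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as the pause image, "cgroupfs" as the cgroup manager, a "pod" conmon cgroup, and a default_sysctls entry that sets net.ipv4.ip_unprivileged_port_start=0. A quick manual check of the resulting drop-in (a sketch, assuming the same path on the node) is:

	# show only the settings the provisioning step just rewrote
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf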
	I1205 20:31:28.790960  585929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:28.806652  585929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:28.806724  585929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:28.821835  585929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
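The sysctl probe above fails with status 255 because /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is what the modprobe br_netfilter run right above it addresses. A manual equivalent on such a node (a sketch) would be:

	sudo modprobe br_netfilter                           # creates /proc/sys/net/bridge/*
	sudo sysctl net.bridge.bridge-nf-call-iptables       # should now print a value instead of erroring
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"  # same forwarding toggle as in the log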
	I1205 20:31:28.832688  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:28.967877  585929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:29.084571  585929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:29.084666  585929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:29.089892  585929 start.go:563] Will wait 60s for crictl version
	I1205 20:31:29.089958  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:31:29.094021  585929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:29.132755  585929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:29.132843  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.161779  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.194415  585929 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:31:27.042893  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:29.545284  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:27.696342  585025 main.go:141] libmachine: (no-preload-816185) Calling .Start
	I1205 20:31:27.696546  585025 main.go:141] libmachine: (no-preload-816185) Ensuring networks are active...
	I1205 20:31:27.697272  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network default is active
	I1205 20:31:27.697720  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network mk-no-preload-816185 is active
	I1205 20:31:27.698153  585025 main.go:141] libmachine: (no-preload-816185) Getting domain xml...
	I1205 20:31:27.698993  585025 main.go:141] libmachine: (no-preload-816185) Creating domain...
	I1205 20:31:29.005551  585025 main.go:141] libmachine: (no-preload-816185) Waiting to get IP...
	I1205 20:31:29.006633  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.007124  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.007217  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.007100  586921 retry.go:31] will retry after 264.716976ms: waiting for machine to come up
	I1205 20:31:29.273821  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.274364  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.274393  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.274318  586921 retry.go:31] will retry after 307.156436ms: waiting for machine to come up
	I1205 20:31:29.582968  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.583583  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.583621  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.583531  586921 retry.go:31] will retry after 335.63624ms: waiting for machine to come up
	I1205 20:31:29.921262  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.921823  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.921855  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.921771  586921 retry.go:31] will retry after 577.408278ms: waiting for machine to come up
	I1205 20:31:30.500556  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:30.501058  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:30.501095  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:30.500999  586921 retry.go:31] will retry after 757.019094ms: waiting for machine to come up
	I1205 20:31:27.220124  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:27.719460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.719599  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.219672  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.720450  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.220436  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.719573  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.220357  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.720052  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.195845  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:29.198779  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199138  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:29.199171  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199365  585929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:29.204553  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:29.217722  585929 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:29.217873  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:29.217943  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:29.259006  585929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:29.259105  585929 ssh_runner.go:195] Run: which lz4
	I1205 20:31:29.264049  585929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:29.268978  585929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:29.269019  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:31:30.811247  585929 crio.go:462] duration metric: took 1.547244528s to copy over tarball
	I1205 20:31:30.811340  585929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:31:32.043543  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:34.044420  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:31.260083  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.260626  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.260658  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.260593  586921 retry.go:31] will retry after 593.111543ms: waiting for machine to come up
	I1205 20:31:31.854850  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.855286  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.855316  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.855224  586921 retry.go:31] will retry after 832.693762ms: waiting for machine to come up
	I1205 20:31:32.690035  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:32.690489  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:32.690515  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:32.690448  586921 retry.go:31] will retry after 1.128242733s: waiting for machine to come up
	I1205 20:31:33.820162  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:33.820798  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:33.820831  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:33.820732  586921 retry.go:31] will retry after 1.331730925s: waiting for machine to come up
	I1205 20:31:35.154230  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:35.154661  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:35.154690  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:35.154590  586921 retry.go:31] will retry after 2.19623815s: waiting for machine to come up
	I1205 20:31:32.220318  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:32.719780  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.220114  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.719554  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.720021  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.219461  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.720334  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.219480  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.720159  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.093756  585929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282380101s)
	I1205 20:31:33.093791  585929 crio.go:469] duration metric: took 2.282510298s to extract the tarball
	I1205 20:31:33.093802  585929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:33.132232  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:33.188834  585929 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:31:33.188868  585929 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:31:33.188879  585929 kubeadm.go:934] updating node { 192.168.50.96 8444 v1.31.2 crio true true} ...
	I1205 20:31:33.189027  585929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-942599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
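The unit snippet above is the kubelet drop-in minikube renders (ExecStart is cleared and then re-set with --hostname-override and --node-ip for this profile); it is copied to the node as 10-kubeadm.conf a few lines further down. On the node itself, the effective unit can be inspected with standard systemd tooling, for example:

	systemctl cat kubelet        # base unit plus the 10-kubeadm.conf drop-in
	systemctl status kubelet     # confirms which ExecStart is actually in effect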
	I1205 20:31:33.189114  585929 ssh_runner.go:195] Run: crio config
	I1205 20:31:33.235586  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:31:33.235611  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:33.235621  585929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:33.235644  585929 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-942599 NodeName:default-k8s-diff-port-942599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:31:33.235770  585929 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-942599"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.96"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:33.235835  585929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:31:33.246737  585929 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:33.246829  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:33.257763  585929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1205 20:31:33.276025  585929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:33.294008  585929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
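The 2305-byte file copied here is the rendered kubeadm config shown above, staged as /var/tmp/minikube/kubeadm.yaml.new. If it ever needs to be inspected by hand, the same paths the restart logic uses later in this log can be compared directly; a minimal sketch:

	# view the freshly staged config and diff it against the one already on disk
	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new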
	I1205 20:31:33.311640  585929 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:33.315963  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:33.328834  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:33.439221  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:33.457075  585929 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599 for IP: 192.168.50.96
	I1205 20:31:33.457103  585929 certs.go:194] generating shared ca certs ...
	I1205 20:31:33.457131  585929 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:33.457337  585929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:33.457407  585929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:33.457420  585929 certs.go:256] generating profile certs ...
	I1205 20:31:33.457528  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.key
	I1205 20:31:33.457612  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key.d50b8fb2
	I1205 20:31:33.457668  585929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key
	I1205 20:31:33.457824  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:33.457870  585929 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:33.457885  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:33.457924  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:33.457959  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:33.457989  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:33.458044  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:33.459092  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:33.502129  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:33.533461  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:33.572210  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:33.597643  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 20:31:33.621382  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:31:33.648568  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:33.682320  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:33.707415  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:33.733418  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:33.760333  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:33.794070  585929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:33.813531  585929 ssh_runner.go:195] Run: openssl version
	I1205 20:31:33.820336  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:33.832321  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839066  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839135  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.845526  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:33.857376  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:33.868864  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873732  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873799  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.881275  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:33.893144  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:33.904679  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909686  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909760  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.915937  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
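The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's CA lookup convention: the link name is the certificate's subject hash, which is what the preceding openssl x509 -hash -noout runs compute. Reproducing one of them by hand, using the cert path from this log:

	# prints the subject hash OpenSSL uses for the symlink name,
	# expected to match the 3ec20f2e.0 link created above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem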
	I1205 20:31:33.927401  585929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:33.932326  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:33.939165  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:33.945630  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:33.951867  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:33.957857  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:33.963994  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
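Each of the openssl runs above uses -checkend 86400, which makes openssl exit non-zero if the certificate will expire within the next 86400 seconds (24 hours); a failing check is presumably what would trigger certificate regeneration instead of reuse. A standalone version of the same check, reusing one of the cert paths from this log:

	# prints "Certificate will not expire" and exits 0 if the cert is valid for more than 24h
	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt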
	I1205 20:31:33.969964  585929 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:33.970050  585929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:33.970103  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.016733  585929 cri.go:89] found id: ""
	I1205 20:31:34.016814  585929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:34.027459  585929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:34.027478  585929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:34.027523  585929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:34.037483  585929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:34.038588  585929 kubeconfig.go:125] found "default-k8s-diff-port-942599" server: "https://192.168.50.96:8444"
	I1205 20:31:34.041140  585929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:34.050903  585929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.96
	I1205 20:31:34.050938  585929 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:34.050956  585929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:34.051014  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.090840  585929 cri.go:89] found id: ""
	I1205 20:31:34.090932  585929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
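The two commands above are the generic "stop everything" step of a restart: list any kube-system containers through crictl, stop whatever is found, then stop the kubelet. In this run the listing returns no IDs, so only the kubelet stop has any effect. Condensed into shell (a sketch of the pattern visible in the log, not minikube's exact code path):

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # for each returned ID (none in this run):
    #   sudo crictl stop <id>
    sudo systemctl stop kubelet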
	I1205 20:31:34.107686  585929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:34.118277  585929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:34.118305  585929 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:34.118359  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 20:31:34.127654  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:34.127733  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:34.137295  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 20:31:34.147005  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:34.147076  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:34.158576  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.167933  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:34.168022  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.177897  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 20:31:34.187467  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:34.187539  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
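The grep/rm sequence above is the stale-kubeconfig cleanup: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, minikube checks whether the file already points at the expected control-plane endpoint (here https://control-plane.minikube.internal:8444, since this profile uses the non-default API server port 8444) and removes the file if it does not. In this run none of the files exist, so every grep exits with status 2 and the rm -f calls are no-ops. Per file, the logged steps reduce to roughly this shell pattern (a condensed sketch, not minikube's Go implementation):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep "https://control-plane.minikube.internal:8444" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done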
	I1205 20:31:34.197825  585929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:34.210775  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:34.337491  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.308389  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.549708  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.624390  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.706794  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:35.706912  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.207620  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.707990  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.727214  585929 api_server.go:72] duration metric: took 1.020418782s to wait for apiserver process to appear ...
	I1205 20:31:36.727257  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:31:36.727289  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:36.727908  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
	I1205 20:31:37.228102  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
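After the kubeadm init phases, the restart path waits in two stages: first for a kube-apiserver process to appear (the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" calls), then for the API server's /healthz endpoint to answer at https://192.168.50.96:8444. The first probe above is refused because the listener is not up yet, so the check is retried on a short interval. A roughly equivalent manual check (sketch only) would be:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    curl -k https://192.168.50.96:8444/healthz   # expect "ok" once the apiserver is ready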
	I1205 20:31:36.544564  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:39.043806  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:37.352371  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:37.352911  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:37.352946  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:37.352862  586921 retry.go:31] will retry after 2.333670622s: waiting for machine to come up
	I1205 20:31:39.688034  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:39.688597  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:39.688630  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:39.688537  586921 retry.go:31] will retry after 2.476657304s: waiting for machine to come up
	I1205 20:31:37.219933  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:37.720360  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.219574  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.720034  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.219449  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.719752  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.219718  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.719771  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.219548  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.720381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.228416  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:42.228489  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:41.044569  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:43.542439  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:45.543063  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:42.168384  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:42.168759  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:42.168781  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:42.168719  586921 retry.go:31] will retry after 3.531210877s: waiting for machine to come up
	I1205 20:31:45.701387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701831  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has current primary IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701868  585025 main.go:141] libmachine: (no-preload-816185) Found IP for machine: 192.168.61.37
	I1205 20:31:45.701882  585025 main.go:141] libmachine: (no-preload-816185) Reserving static IP address...
	I1205 20:31:45.702270  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.702313  585025 main.go:141] libmachine: (no-preload-816185) DBG | skip adding static IP to network mk-no-preload-816185 - found existing host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"}
	I1205 20:31:45.702327  585025 main.go:141] libmachine: (no-preload-816185) Reserved static IP address: 192.168.61.37
	I1205 20:31:45.702343  585025 main.go:141] libmachine: (no-preload-816185) Waiting for SSH to be available...
	I1205 20:31:45.702355  585025 main.go:141] libmachine: (no-preload-816185) DBG | Getting to WaitForSSH function...
	I1205 20:31:45.704606  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.704941  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.704964  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.705115  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH client type: external
	I1205 20:31:45.705146  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa (-rw-------)
	I1205 20:31:45.705181  585025 main.go:141] libmachine: (no-preload-816185) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:45.705212  585025 main.go:141] libmachine: (no-preload-816185) DBG | About to run SSH command:
	I1205 20:31:45.705224  585025 main.go:141] libmachine: (no-preload-816185) DBG | exit 0
	I1205 20:31:45.828472  585025 main.go:141] libmachine: (no-preload-816185) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:45.828882  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetConfigRaw
	I1205 20:31:45.829596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:45.832338  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832643  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.832671  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832970  585025 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/config.json ...
	I1205 20:31:45.833244  585025 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:45.833275  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:45.833498  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.835937  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836344  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.836375  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836555  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.836744  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.836906  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.837046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.837207  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.837441  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.837456  585025 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:45.940890  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:45.940926  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941234  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:31:45.941262  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941453  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.944124  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944537  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.944585  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944677  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.944862  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945026  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945169  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.945343  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.945511  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.945523  585025 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-816185 && echo "no-preload-816185" | sudo tee /etc/hostname
	I1205 20:31:42.220435  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.720366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.219567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.719652  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.220259  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.719556  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.219850  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.720302  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.220377  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.720107  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.229369  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:47.229421  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:46.063755  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-816185
	
	I1205 20:31:46.063794  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.066742  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067177  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.067208  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067371  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.067576  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067756  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067937  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.068147  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.068392  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.068411  585025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-816185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-816185/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-816185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:46.182072  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:46.182110  585025 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:46.182144  585025 buildroot.go:174] setting up certificates
	I1205 20:31:46.182160  585025 provision.go:84] configureAuth start
	I1205 20:31:46.182172  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:46.182490  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:46.185131  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185461  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.185493  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185684  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.188070  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188467  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.188499  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188606  585025 provision.go:143] copyHostCerts
	I1205 20:31:46.188674  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:46.188695  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:46.188753  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:46.188860  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:46.188872  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:46.188892  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:46.188973  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:46.188980  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:46.188998  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:46.189044  585025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.no-preload-816185 san=[127.0.0.1 192.168.61.37 localhost minikube no-preload-816185]
	I1205 20:31:46.460195  585025 provision.go:177] copyRemoteCerts
	I1205 20:31:46.460323  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:46.460394  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.463701  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464171  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.464224  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464422  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.464646  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.464839  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.465024  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.557665  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 20:31:46.583225  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:46.608114  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:46.633059  585025 provision.go:87] duration metric: took 450.879004ms to configureAuth
	I1205 20:31:46.633100  585025 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:46.633319  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:46.633400  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.636634  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637103  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.637138  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637368  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.637624  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.637841  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.638000  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.638189  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.638425  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.638442  585025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:46.877574  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:46.877610  585025 machine.go:96] duration metric: took 1.044347044s to provisionDockerMachine
	I1205 20:31:46.877623  585025 start.go:293] postStartSetup for "no-preload-816185" (driver="kvm2")
	I1205 20:31:46.877634  585025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:46.877668  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:46.878007  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:46.878046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.881022  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881361  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.881422  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881554  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.881741  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.881883  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.882045  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.967997  585025 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:46.972667  585025 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:46.972697  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:46.972770  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:46.972844  585025 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:46.972931  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:46.983157  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:47.009228  585025 start.go:296] duration metric: took 131.588013ms for postStartSetup
	I1205 20:31:47.009272  585025 fix.go:56] duration metric: took 19.33958416s for fixHost
	I1205 20:31:47.009296  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.012039  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012388  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.012416  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012620  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.012858  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013022  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.013318  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:47.013490  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:47.013501  585025 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:47.117166  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430707.083043174
	
	I1205 20:31:47.117195  585025 fix.go:216] guest clock: 1733430707.083043174
	I1205 20:31:47.117203  585025 fix.go:229] Guest: 2024-12-05 20:31:47.083043174 +0000 UTC Remote: 2024-12-05 20:31:47.009275956 +0000 UTC m=+361.003271038 (delta=73.767218ms)
	I1205 20:31:47.117226  585025 fix.go:200] guest clock delta is within tolerance: 73.767218ms
	I1205 20:31:47.117232  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 19.447576666s
	I1205 20:31:47.117259  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.117541  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:47.120283  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120627  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.120653  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120805  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121301  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121492  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121612  585025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:47.121656  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.121727  585025 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:47.121750  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.124146  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124503  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124530  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124723  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124922  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124933  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125086  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125126  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125227  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.125505  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125653  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.221731  585025 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:47.228177  585025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:47.377695  585025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:47.384534  585025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:47.384623  585025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:47.402354  585025 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:47.402388  585025 start.go:495] detecting cgroup driver to use...
	I1205 20:31:47.402454  585025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:47.426593  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:47.443953  585025 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:47.444011  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:47.461107  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:47.477872  585025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:47.617097  585025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:47.780021  585025 docker.go:233] disabling docker service ...
	I1205 20:31:47.780140  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:47.795745  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:47.809573  585025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:47.959910  585025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:48.081465  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:48.096513  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:48.116342  585025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:48.116409  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.128016  585025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:48.128095  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.139511  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.151241  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.162858  585025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:48.174755  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.185958  585025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.203724  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.215682  585025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:48.226478  585025 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:48.226551  585025 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:48.242781  585025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:31:48.254921  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:48.373925  585025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:48.471515  585025 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:48.471625  585025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:48.477640  585025 start.go:563] Will wait 60s for crictl version
	I1205 20:31:48.477707  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.481862  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:48.521367  585025 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:48.521465  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.552343  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.583089  585025 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
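The block above rewrites the node's runtime configuration in place: /etc/crictl.yaml is pointed at the cri-o socket, and the sed commands edit /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, put conmon in the pod cgroup, and open unprivileged low ports; br_netfilter is then loaded and ip_forward enabled before cri-o is restarted. Assuming the sed expressions applied cleanly, the relevant results look roughly like this (reconstructed from the logged commands, not copied from the node):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]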
	I1205 20:31:48.043043  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:50.043172  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:48.584504  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:48.587210  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587539  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:48.587568  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587788  585025 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:48.592190  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:48.606434  585025 kubeadm.go:883] updating cluster {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:48.606605  585025 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:48.606666  585025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:48.642948  585025 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:48.642978  585025 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:48.643061  585025 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.643092  585025 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.643168  585025 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.643075  585025 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.643248  585025 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 20:31:48.643119  585025 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644692  585025 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.644712  585025 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 20:31:48.644694  585025 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.644798  585025 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.644800  585025 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644858  585025 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.811007  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.819346  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.859678  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 20:31:48.864065  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.864191  585025 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 20:31:48.864249  585025 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.864310  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.883959  585025 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 20:31:48.884022  585025 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.884078  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.902180  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.918167  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.946617  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.039706  585025 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 20:31:49.039760  585025 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.039783  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.039808  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039869  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.039887  585025 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 20:31:49.039913  585025 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.039938  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039947  585025 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 20:31:49.039969  585025 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.040001  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.040002  585025 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 20:31:49.040026  585025 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.040069  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.098900  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.098990  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.105551  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.105588  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.105612  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.105646  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.201473  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.218211  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.257277  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.257335  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.257345  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.257479  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.316037  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 20:31:49.316135  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.316159  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.356780  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 20:31:49.356906  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:49.382843  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.405772  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.405863  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.428491  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 20:31:49.428541  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 20:31:49.428563  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428587  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 20:31:49.428611  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428648  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:49.487794  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 20:31:49.487825  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 20:31:49.487893  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 20:31:49.487917  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:31:49.487927  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:49.488022  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:31:49.830311  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:47.219913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.720441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.220220  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.719997  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.219843  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.719591  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.220132  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.719528  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.720234  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.230527  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:52.230575  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:52.543415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:55.042668  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:52.150499  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.721854606s)
	I1205 20:31:52.150547  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 20:31:52.150573  585025 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150588  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.721911838s)
	I1205 20:31:52.150623  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150627  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 20:31:52.150697  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.662646854s)
	I1205 20:31:52.150727  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 20:31:52.150752  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.662648047s)
	I1205 20:31:52.150776  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 20:31:52.150785  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.662799282s)
	I1205 20:31:52.150804  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1205 20:31:52.150834  585025 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.320487562s)
	I1205 20:31:52.150874  585025 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:31:52.150907  585025 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.150943  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:55.858372  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.707687772s)
	I1205 20:31:55.858414  585025 ssh_runner.go:235] Completed: which crictl: (3.707446137s)
	I1205 20:31:55.858498  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:55.858426  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 20:31:55.858580  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.858640  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.901375  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.219602  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.719522  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.220117  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.720426  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.220177  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.720100  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.219569  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.719796  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.219490  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.720420  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
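	The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above are a roughly 500ms poll that waits for a kube-apiserver process to appear on the node. A minimal sketch of that loop in Go, shelling out to the same command; this is an illustration under those assumptions, not minikube's api_server.go code:

	// wait-apiserver.go: poll pgrep until a kube-apiserver process shows up (sketch).
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			// -f matches the full command line, -x requires the regex to match it exactly,
			// -n returns only the newest matching process
			cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
			if out, err := cmd.Output(); err == nil {
				fmt.Printf("kube-apiserver running, pid: %s", out)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("kube-apiserver process did not appear before the deadline")
	}

	pgrep exits non-zero when nothing matches, so a nil error from Output() is what ends the loop.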
	I1205 20:31:57.231370  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:57.231415  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.612431  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": read tcp 192.168.50.1:36198->192.168.50.96:8444: read: connection reset by peer
	I1205 20:31:57.727638  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.728368  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
	I1205 20:31:57.042989  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:59.043517  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:57.843623  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.984954959s)
	I1205 20:31:57.843662  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 20:31:57.843683  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843731  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843732  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.942323285s)
	I1205 20:31:57.843821  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:00.030765  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.186998467s)
	I1205 20:32:00.030810  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 20:32:00.030840  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.030846  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.18699947s)
	I1205 20:32:00.030897  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:32:00.030906  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.031026  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:31:57.219497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.720337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.219807  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.720112  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.219949  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.719626  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.219871  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.719466  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.219491  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.719760  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.227807  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:01.044658  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:03.542453  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:05.542887  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:01.486433  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455500806s)
	I1205 20:32:01.486479  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 20:32:01.486512  585025 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:01.486513  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.455460879s)
	I1205 20:32:01.486589  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:32:01.486592  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:03.658906  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.172262326s)
	I1205 20:32:03.658947  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 20:32:03.658979  585025 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:03.659024  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:04.304774  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 20:32:04.304825  585025 cache_images.go:123] Successfully loaded all cached images
	I1205 20:32:04.304832  585025 cache_images.go:92] duration metric: took 15.661840579s to LoadCachedImages
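	The cache-loading sequence above first runs "podman image inspect --format {{.Id}}" for each image, marks missing ones as "needs transfer", copies the tarball from the host cache if needed, and loads it with "podman load -i". A minimal sketch of that decision for a single image, shelling out to the same commands; the image reference and tarball path are taken from the log, and this is not the cache_images.go implementation:

	// load-cached-image.go: load an image tarball only if the runtime lacks it (sketch).
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// imageLoaded reports whether the runtime already has the image, using the same
	// inspect command the log shows minikube running over ssh.
	func imageLoaded(ref string) bool {
		cmd := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref)
		return cmd.Run() == nil
	}

	func main() {
		ref := "registry.k8s.io/kube-apiserver:v1.31.2"
		tarball := "/var/lib/minikube/images/kube-apiserver_v1.31.2" // path taken from the log
		if imageLoaded(ref) {
			fmt.Println("image already present, skipping load")
			return
		}
		if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
			log.Fatalf("podman load failed: %v\n%s", err, out)
		}
		fmt.Println("image loaded from cache tarball")
	}

	The log above runs this flow for all seven cached images before reporting "Successfully loaded all cached images".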
	I1205 20:32:04.304846  585025 kubeadm.go:934] updating node { 192.168.61.37 8443 v1.31.2 crio true true} ...
	I1205 20:32:04.304983  585025 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-816185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:32:04.305057  585025 ssh_runner.go:195] Run: crio config
	I1205 20:32:04.350303  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:04.350332  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:04.350352  585025 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:32:04.350383  585025 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.37 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-816185 NodeName:no-preload-816185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:32:04.350534  585025 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-816185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.37"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.37"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:32:04.350618  585025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
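	The kubeadm config dumped just above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---") that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch that splits such a stream and prints each document's kind, assuming gopkg.in/yaml.v3 is available; it is illustrative only, not part of minikube:

	// list-kubeadm-docs.go: print the kind of every document in the generated config (sketch).
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log below
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break // no more documents in the stream
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}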
	I1205 20:32:04.362733  585025 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:32:04.362815  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:32:04.374219  585025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 20:32:04.392626  585025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:32:04.409943  585025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1205 20:32:04.428180  585025 ssh_runner.go:195] Run: grep 192.168.61.37	control-plane.minikube.internal$ /etc/hosts
	I1205 20:32:04.432433  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:32:04.447274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:04.591755  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:04.609441  585025 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185 for IP: 192.168.61.37
	I1205 20:32:04.609472  585025 certs.go:194] generating shared ca certs ...
	I1205 20:32:04.609494  585025 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:04.609664  585025 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:32:04.609729  585025 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:32:04.609745  585025 certs.go:256] generating profile certs ...
	I1205 20:32:04.609910  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/client.key
	I1205 20:32:04.609991  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key.e9b85612
	I1205 20:32:04.610027  585025 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key
	I1205 20:32:04.610146  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:32:04.610173  585025 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:32:04.610182  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:32:04.610216  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:32:04.610264  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:32:04.610313  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:32:04.610377  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:32:04.611264  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:32:04.642976  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:32:04.679840  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:32:04.707526  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:32:04.746333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:32:04.782671  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:32:04.819333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:32:04.845567  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:32:04.870304  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:32:04.894597  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:32:04.918482  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:32:04.942992  585025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:32:04.960576  585025 ssh_runner.go:195] Run: openssl version
	I1205 20:32:04.966908  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:32:04.978238  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.982959  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.983023  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.989070  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:32:05.000979  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:32:05.012901  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.017583  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.018169  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.025450  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:32:05.037419  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:32:05.050366  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055211  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055255  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.061388  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:32:05.074182  585025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:32:05.079129  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:32:05.085580  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:32:05.091938  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:32:05.099557  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:32:05.105756  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:32:05.112019  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
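	Each "openssl x509 -noout ... -checkend 86400" run above exits zero only if the certificate remains valid for at least another 86400 seconds (24 hours). The same check can be expressed with the Go standard library; a minimal sketch using one of the cert paths from the log (the file lives on the VM, so this is illustrative, not minikube's certs.go):

	// checkend.go: fail if a certificate expires within the next 24 hours (sketch).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		remaining := time.Until(cert.NotAfter)
		if remaining < 24*time.Hour {
			fmt.Printf("certificate expires within 24h (NotAfter=%s)\n", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Printf("certificate valid for another %s\n", remaining.Round(time.Hour))
	}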
	I1205 20:32:05.118426  585025 kubeadm.go:392] StartCluster: {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:32:05.118540  585025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:32:05.118622  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.162731  585025 cri.go:89] found id: ""
	I1205 20:32:05.162821  585025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:32:05.174100  585025 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:32:05.174127  585025 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:32:05.174181  585025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:32:05.184949  585025 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:32:05.186127  585025 kubeconfig.go:125] found "no-preload-816185" server: "https://192.168.61.37:8443"
	I1205 20:32:05.188601  585025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:32:05.198779  585025 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.37
	I1205 20:32:05.198815  585025 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:32:05.198828  585025 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:32:05.198881  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.241175  585025 cri.go:89] found id: ""
	I1205 20:32:05.241247  585025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:32:05.259698  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:32:05.270282  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:32:05.270310  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:32:05.270370  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:32:05.280440  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:32:05.280519  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:32:05.290825  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:32:05.300680  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:32:05.300745  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:32:05.311108  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.320854  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:32:05.320918  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.331099  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:32:05.340948  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:32:05.341017  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:32:05.351280  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:32:05.361567  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:05.477138  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:02.220337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:02.720145  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.219463  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.719913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.219813  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.719940  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.219830  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.720324  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.220287  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.719584  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.228372  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:03.228433  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:08.042416  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:10.043011  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:06.259256  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.483460  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.557633  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.666782  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:32:06.666885  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.167840  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.667069  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.701559  585025 api_server.go:72] duration metric: took 1.034769472s to wait for apiserver process to appear ...
	I1205 20:32:07.701592  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:32:07.701612  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.640462  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.640498  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.640521  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.647093  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.647118  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.702286  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.711497  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:10.711528  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:07.219989  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.720289  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.220381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.719947  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.219838  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.719666  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.219756  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.720312  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.220369  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.720004  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.202247  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.206625  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.206650  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:11.702760  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.718941  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.718974  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:12.202567  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:12.207589  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:32:12.214275  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:12.214304  585025 api_server.go:131] duration metric: took 4.512704501s to wait for apiserver health ...
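	The healthz sequence above is the usual progression while the control plane restarts: 403 for the anonymous probe, then 500 while post-start hooks such as rbac/bootstrap-roles finish, then 200 "ok". A minimal sketch of such a poll against the endpoint from the log, skipping TLS verification for the self-signed serving certificate; it is an illustration, not the api_server.go implementation:

	// wait-healthz.go: poll /healthz until the apiserver reports 200 "ok" (sketch).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// the apiserver serves a cert this probe does not trust, so skip verification
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.61.37:8443/healthz" // endpoint taken from the log above
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthy: %s\n", body)
					return
				}
				fmt.Printf("not ready (%d), retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver did not become healthy before the deadline")
	}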
	I1205 20:32:12.214314  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:12.214321  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:12.216193  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:08.229499  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:08.229544  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:12.545378  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:15.043628  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:12.217640  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:12.241907  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:32:12.262114  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:12.275246  585025 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:12.275296  585025 system_pods.go:61] "coredns-7c65d6cfc9-j2hr2" [9ce413ab-c304-40dd-af68-80f15db0e2ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:12.275308  585025 system_pods.go:61] "etcd-no-preload-816185" [ddc20062-02d9-4f9d-a2fb-fa2c7d6aa1cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:12.275319  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [07ff76f2-b05e-4434-b8f9-448bc200507a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:12.275328  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [7c701058-791a-4097-a913-f6989a791067] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:12.275340  585025 system_pods.go:61] "kube-proxy-rjp4j" [340e9ccc-0290-4d3d-829c-44ad65410f3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:12.275348  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [c2f3b04c-9e3a-4060-a6d0-fb9eb2aa5e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:32:12.275359  585025 system_pods.go:61] "metrics-server-6867b74b74-vjwq2" [47ff24fe-0edb-4d06-b280-a0d965b25dae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:12.275367  585025 system_pods.go:61] "storage-provisioner" [bd385e87-56ea-417c-a4a8-b8a6e4f94114] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:12.275376  585025 system_pods.go:74] duration metric: took 13.23725ms to wait for pod list to return data ...
	I1205 20:32:12.275387  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:12.279719  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:12.279746  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:12.279755  585025 node_conditions.go:105] duration metric: took 4.364464ms to run NodePressure ...
	I1205 20:32:12.279774  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:12.562221  585025 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566599  585025 kubeadm.go:739] kubelet initialised
	I1205 20:32:12.566627  585025 kubeadm.go:740] duration metric: took 4.374855ms waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566639  585025 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:12.571780  585025 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:14.579614  585025 pod_ready.go:103] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:12.220304  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:12.720348  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.219553  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.720078  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.219614  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.719625  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.220118  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.720577  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.220392  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.719538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.230519  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:13.230567  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.061543  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.061583  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.061603  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.078424  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.078457  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.227852  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.553664  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.553705  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:16.728155  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.734800  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.734853  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.228013  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.233541  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:17.233577  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.727878  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.736731  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:32:17.746474  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:17.746511  585929 api_server.go:131] duration metric: took 41.019245279s to wait for apiserver health ...
	I1205 20:32:17.746523  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:32:17.746531  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:17.748464  585929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:17.750113  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:17.762750  585929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:32:17.786421  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:17.826859  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:17.826918  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:17.826934  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:17.826946  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:17.826959  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:17.826969  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:17.826980  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:32:17.826989  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:17.827000  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:17.827010  585929 system_pods.go:74] duration metric: took 40.565274ms to wait for pod list to return data ...
	I1205 20:32:17.827025  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:17.838000  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:17.838034  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:17.838050  585929 node_conditions.go:105] duration metric: took 11.010352ms to run NodePressure ...
	I1205 20:32:17.838075  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:18.215713  585929 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222162  585929 kubeadm.go:739] kubelet initialised
	I1205 20:32:18.222187  585929 kubeadm.go:740] duration metric: took 6.444578ms waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222199  585929 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:18.226988  585929 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.235570  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235600  585929 pod_ready.go:82] duration metric: took 8.582972ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.235609  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235617  585929 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.242596  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242623  585929 pod_ready.go:82] duration metric: took 6.99814ms for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.242634  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242642  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.248351  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248373  585929 pod_ready.go:82] duration metric: took 5.725371ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.248383  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248390  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.258151  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258174  585929 pod_ready.go:82] duration metric: took 9.778119ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.258183  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258190  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.619579  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619623  585929 pod_ready.go:82] duration metric: took 361.426091ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.619638  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619649  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.019623  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019655  585929 pod_ready.go:82] duration metric: took 399.997558ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.019669  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019676  585929 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.420201  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420228  585929 pod_ready.go:82] duration metric: took 400.54576ms for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.420242  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420251  585929 pod_ready.go:39] duration metric: took 1.198040831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:19.420292  585929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:32:19.434385  585929 ops.go:34] apiserver oom_adj: -16
	I1205 20:32:19.434420  585929 kubeadm.go:597] duration metric: took 45.406934122s to restartPrimaryControlPlane
	I1205 20:32:19.434434  585929 kubeadm.go:394] duration metric: took 45.464483994s to StartCluster
	I1205 20:32:19.434460  585929 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.434560  585929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:32:19.436299  585929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.436590  585929 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:32:19.436736  585929 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:32:19.436837  585929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436858  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:32:19.436873  585929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.436883  585929 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:32:19.436923  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.436938  585929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436974  585929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-942599"
	I1205 20:32:19.436922  585929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.437024  585929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.437051  585929 addons.go:243] addon metrics-server should already be in state true
	I1205 20:32:19.437090  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.437365  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437407  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437452  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437480  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437509  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437514  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.438584  585929 out.go:177] * Verifying Kubernetes components...
	I1205 20:32:19.440376  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:19.453761  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I1205 20:32:19.453782  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I1205 20:32:19.453767  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I1205 20:32:19.454289  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454441  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454451  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454851  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454871  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.455005  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455021  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455286  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455350  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455409  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455461  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.455910  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455927  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455958  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.455966  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.458587  585929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.458605  585929 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:32:19.458627  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.458955  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.458995  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.472175  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I1205 20:32:19.472667  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.472927  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I1205 20:32:19.473215  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.473233  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.473401  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40929
	I1205 20:32:19.473570  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473608  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.473839  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.474155  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474187  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474290  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474313  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474546  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474638  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474711  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.475267  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.475320  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.476105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.476447  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.478117  585929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:19.478117  585929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:32:17.545165  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.044285  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:17.079986  585025 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:17.080014  585025 pod_ready.go:82] duration metric: took 4.508210865s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:17.080025  585025 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.086070  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.587742  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:20.587775  585025 pod_ready.go:82] duration metric: took 3.507742173s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:20.587789  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.479638  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:32:19.479658  585929 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:32:19.479686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.479719  585929 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.479737  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:32:19.479750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.483208  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483350  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483773  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483790  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483873  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483887  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483936  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484123  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484294  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484324  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484438  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.484456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484571  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.533651  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I1205 20:32:19.534273  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.534802  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.534833  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.535282  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.535535  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.538221  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.538787  585929 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.538804  585929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:32:19.538825  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.541876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542318  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.542354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542556  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.542744  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.542944  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.543129  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.630282  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:19.652591  585929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:19.719058  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.810931  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.812113  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:32:19.812136  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:32:19.875725  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:32:19.875761  585929 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:32:19.946353  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:19.946390  585929 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:32:20.010445  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:20.231055  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231082  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231425  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231454  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231469  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231478  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231476  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.231764  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231784  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231783  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.247021  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.247051  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.247463  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.247490  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.247488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.074948  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.263976727s)
	I1205 20:32:21.075015  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075029  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075397  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075438  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.075449  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075457  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.075766  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075785  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134215  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.123724822s)
	I1205 20:32:21.134271  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134588  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134604  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134612  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134615  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.134620  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134878  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134891  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134904  585929 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-942599"
	I1205 20:32:21.136817  585929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:32:17.220437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:17.220539  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:17.272666  585602 cri.go:89] found id: ""
	I1205 20:32:17.272702  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.272716  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:17.272723  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:17.272797  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:17.314947  585602 cri.go:89] found id: ""
	I1205 20:32:17.314977  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.314989  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:17.314996  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:17.315061  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:17.354511  585602 cri.go:89] found id: ""
	I1205 20:32:17.354548  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.354561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:17.354571  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:17.354640  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:17.393711  585602 cri.go:89] found id: ""
	I1205 20:32:17.393745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.393759  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:17.393768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:17.393836  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:17.434493  585602 cri.go:89] found id: ""
	I1205 20:32:17.434526  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.434535  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:17.434541  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:17.434602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:17.476201  585602 cri.go:89] found id: ""
	I1205 20:32:17.476235  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.476245  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:17.476253  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:17.476341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:17.516709  585602 cri.go:89] found id: ""
	I1205 20:32:17.516745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.516755  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:17.516762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:17.516818  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:17.557270  585602 cri.go:89] found id: ""
	I1205 20:32:17.557305  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.557314  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:17.557324  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:17.557348  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:17.606494  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:17.606540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:17.681372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:17.681412  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:17.696778  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:17.696816  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:17.839655  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:17.839679  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:17.839717  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.423552  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:20.439794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:20.439875  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:20.482820  585602 cri.go:89] found id: ""
	I1205 20:32:20.482866  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.482880  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:20.482888  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:20.482958  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:20.523590  585602 cri.go:89] found id: ""
	I1205 20:32:20.523629  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.523641  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:20.523649  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:20.523727  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:20.601603  585602 cri.go:89] found id: ""
	I1205 20:32:20.601638  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.601648  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:20.601656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:20.601728  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:20.643927  585602 cri.go:89] found id: ""
	I1205 20:32:20.643959  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.643972  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:20.643981  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:20.644054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:20.690935  585602 cri.go:89] found id: ""
	I1205 20:32:20.690964  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.690975  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:20.690984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:20.691054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:20.728367  585602 cri.go:89] found id: ""
	I1205 20:32:20.728400  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.728412  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:20.728420  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:20.728489  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:20.766529  585602 cri.go:89] found id: ""
	I1205 20:32:20.766562  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.766571  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:20.766578  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:20.766657  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:20.805641  585602 cri.go:89] found id: ""
	I1205 20:32:20.805680  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.805690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:20.805701  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:20.805718  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:20.884460  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:20.884495  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:20.884514  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.998367  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:20.998429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:21.041210  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:21.041247  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:21.103519  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:21.103557  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:21.138175  585929 addons.go:510] duration metric: took 1.701453382s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:32:21.657269  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:22.541880  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:24.543481  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:22.595422  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.594392  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:23.594419  585025 pod_ready.go:82] duration metric: took 3.006622534s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:23.594430  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:25.601616  585025 pod_ready.go:103] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.619187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:23.633782  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:23.633872  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:23.679994  585602 cri.go:89] found id: ""
	I1205 20:32:23.680023  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.680032  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:23.680038  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:23.680094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:23.718362  585602 cri.go:89] found id: ""
	I1205 20:32:23.718425  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.718439  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:23.718447  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:23.718520  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:23.758457  585602 cri.go:89] found id: ""
	I1205 20:32:23.758491  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.758500  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:23.758506  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:23.758558  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:23.794612  585602 cri.go:89] found id: ""
	I1205 20:32:23.794649  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.794662  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:23.794671  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:23.794738  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:23.832309  585602 cri.go:89] found id: ""
	I1205 20:32:23.832341  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.832354  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:23.832361  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:23.832421  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:23.868441  585602 cri.go:89] found id: ""
	I1205 20:32:23.868472  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.868484  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:23.868492  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:23.868573  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:23.902996  585602 cri.go:89] found id: ""
	I1205 20:32:23.903025  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.903036  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:23.903050  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:23.903115  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:23.939830  585602 cri.go:89] found id: ""
	I1205 20:32:23.939865  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.939879  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:23.939892  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:23.939909  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:23.992310  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:23.992354  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:24.007378  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:24.007414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:24.077567  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:24.077594  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:24.077608  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:24.165120  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:24.165163  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:26.711674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:26.726923  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:26.727008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:26.763519  585602 cri.go:89] found id: ""
	I1205 20:32:26.763554  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.763563  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:26.763570  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:26.763628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:26.802600  585602 cri.go:89] found id: ""
	I1205 20:32:26.802635  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.802644  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:26.802650  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:26.802705  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:26.839920  585602 cri.go:89] found id: ""
	I1205 20:32:26.839967  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.839981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:26.839989  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:26.840076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:24.157515  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:26.657197  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:27.656811  585929 node_ready.go:49] node "default-k8s-diff-port-942599" has status "Ready":"True"
	I1205 20:32:27.656842  585929 node_ready.go:38] duration metric: took 8.004215314s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:27.656854  585929 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:27.662792  585929 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668485  585929 pod_ready.go:93] pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.668510  585929 pod_ready.go:82] duration metric: took 5.690516ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668521  585929 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:26.543536  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:28.544214  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:27.101514  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.101540  585025 pod_ready.go:82] duration metric: took 3.507102769s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.101551  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108084  585025 pod_ready.go:93] pod "kube-proxy-rjp4j" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.108116  585025 pod_ready.go:82] duration metric: took 6.557141ms for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108131  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112915  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.112942  585025 pod_ready.go:82] duration metric: took 4.801285ms for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112955  585025 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.119094  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:26.876377  585602 cri.go:89] found id: ""
	I1205 20:32:26.876406  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.876416  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:26.876422  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:26.876491  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:26.913817  585602 cri.go:89] found id: ""
	I1205 20:32:26.913845  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.913854  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:26.913862  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:26.913936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:26.955739  585602 cri.go:89] found id: ""
	I1205 20:32:26.955775  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.955788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:26.955798  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:26.955863  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:26.996191  585602 cri.go:89] found id: ""
	I1205 20:32:26.996223  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.996234  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:26.996242  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:26.996341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:27.040905  585602 cri.go:89] found id: ""
	I1205 20:32:27.040935  585602 logs.go:282] 0 containers: []
	W1205 20:32:27.040947  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:27.040958  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:27.040973  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:27.098103  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:27.098140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:27.116538  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:27.116574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:27.204154  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:27.204187  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:27.204208  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:27.300380  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:27.300431  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:29.840944  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:29.855784  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:29.855869  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:29.893728  585602 cri.go:89] found id: ""
	I1205 20:32:29.893765  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.893777  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:29.893786  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:29.893867  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:29.930138  585602 cri.go:89] found id: ""
	I1205 20:32:29.930176  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.930186  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:29.930193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:29.930248  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:29.966340  585602 cri.go:89] found id: ""
	I1205 20:32:29.966371  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.966380  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:29.966387  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:29.966463  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:30.003868  585602 cri.go:89] found id: ""
	I1205 20:32:30.003900  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.003920  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:30.003928  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:30.004001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:30.044332  585602 cri.go:89] found id: ""
	I1205 20:32:30.044363  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.044373  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:30.044380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:30.044445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:30.088044  585602 cri.go:89] found id: ""
	I1205 20:32:30.088085  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.088098  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:30.088106  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:30.088173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:30.124221  585602 cri.go:89] found id: ""
	I1205 20:32:30.124248  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.124258  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:30.124285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:30.124357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:30.162092  585602 cri.go:89] found id: ""
	I1205 20:32:30.162121  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.162133  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:30.162146  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:30.162162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:30.218526  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:30.218567  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:30.232240  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:30.232292  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:30.308228  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:30.308260  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:30.308296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:30.389348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:30.389391  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:29.177093  585929 pod_ready.go:93] pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.177118  585929 pod_ready.go:82] duration metric: took 1.508590352s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.177129  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185839  585929 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.185869  585929 pod_ready.go:82] duration metric: took 8.733028ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185883  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191924  585929 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.191950  585929 pod_ready.go:82] duration metric: took 6.059525ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191963  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256484  585929 pod_ready.go:93] pod "kube-proxy-5vdcq" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.256510  585929 pod_ready.go:82] duration metric: took 64.540117ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256521  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656933  585929 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.656961  585929 pod_ready.go:82] duration metric: took 400.432279ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656972  585929 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:31.664326  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.043630  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.044035  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.542861  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.120200  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.120303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.120532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:32.934497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:32.949404  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:32.949488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:33.006117  585602 cri.go:89] found id: ""
	I1205 20:32:33.006148  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.006157  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:33.006163  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:33.006231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:33.064907  585602 cri.go:89] found id: ""
	I1205 20:32:33.064945  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.064958  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:33.064966  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:33.065031  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:33.101268  585602 cri.go:89] found id: ""
	I1205 20:32:33.101295  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.101304  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:33.101310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:33.101378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:33.141705  585602 cri.go:89] found id: ""
	I1205 20:32:33.141733  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.141743  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:33.141750  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:33.141810  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:33.180983  585602 cri.go:89] found id: ""
	I1205 20:32:33.181011  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.181020  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:33.181026  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:33.181086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:33.220742  585602 cri.go:89] found id: ""
	I1205 20:32:33.220779  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.220791  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:33.220799  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:33.220871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:33.255980  585602 cri.go:89] found id: ""
	I1205 20:32:33.256009  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.256017  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:33.256024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:33.256080  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:33.292978  585602 cri.go:89] found id: ""
	I1205 20:32:33.293005  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.293013  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:33.293023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:33.293034  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:33.347167  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:33.347213  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:33.361367  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:33.361408  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:33.435871  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:33.435915  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:33.435932  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:33.518835  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:33.518880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:36.066359  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:36.080867  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:36.080947  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:36.117647  585602 cri.go:89] found id: ""
	I1205 20:32:36.117678  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.117689  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:36.117697  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:36.117763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:36.154376  585602 cri.go:89] found id: ""
	I1205 20:32:36.154412  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.154428  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:36.154436  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:36.154498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:36.193225  585602 cri.go:89] found id: ""
	I1205 20:32:36.193261  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.193274  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:36.193282  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:36.193347  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:36.230717  585602 cri.go:89] found id: ""
	I1205 20:32:36.230748  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.230758  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:36.230764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:36.230817  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:36.270186  585602 cri.go:89] found id: ""
	I1205 20:32:36.270238  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.270252  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:36.270262  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:36.270340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:36.306378  585602 cri.go:89] found id: ""
	I1205 20:32:36.306425  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.306438  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:36.306447  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:36.306531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:36.342256  585602 cri.go:89] found id: ""
	I1205 20:32:36.342289  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.342300  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:36.342306  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:36.342380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:36.380684  585602 cri.go:89] found id: ""
	I1205 20:32:36.380718  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.380732  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:36.380745  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:36.380768  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:36.436066  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:36.436109  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:36.450255  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:36.450285  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:36.521857  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:36.521883  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:36.521897  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:36.608349  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:36.608395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:34.163870  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:36.164890  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:38.042889  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.543140  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:37.619863  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.120462  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:39.157366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:39.171267  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:39.171357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:39.214459  585602 cri.go:89] found id: ""
	I1205 20:32:39.214490  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.214520  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:39.214528  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:39.214583  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:39.250312  585602 cri.go:89] found id: ""
	I1205 20:32:39.250352  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.250366  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:39.250375  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:39.250437  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:39.286891  585602 cri.go:89] found id: ""
	I1205 20:32:39.286932  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.286944  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:39.286952  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:39.287019  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:39.323923  585602 cri.go:89] found id: ""
	I1205 20:32:39.323958  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.323970  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:39.323979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:39.324053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:39.360280  585602 cri.go:89] found id: ""
	I1205 20:32:39.360322  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.360331  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:39.360337  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:39.360403  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:39.397599  585602 cri.go:89] found id: ""
	I1205 20:32:39.397637  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.397650  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:39.397659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:39.397731  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:39.435132  585602 cri.go:89] found id: ""
	I1205 20:32:39.435159  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.435168  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:39.435174  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:39.435241  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:39.470653  585602 cri.go:89] found id: ""
	I1205 20:32:39.470682  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.470690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:39.470700  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:39.470714  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:39.511382  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:39.511413  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:39.563955  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:39.563994  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:39.578015  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:39.578044  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:39.658505  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:39.658535  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:39.658550  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:38.665320  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:41.165054  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.545231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.042231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.620687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.120915  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.248607  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:42.263605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:42.263688  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:42.305480  585602 cri.go:89] found id: ""
	I1205 20:32:42.305508  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.305519  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:42.305527  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:42.305595  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:42.339969  585602 cri.go:89] found id: ""
	I1205 20:32:42.340001  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.340010  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:42.340016  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:42.340090  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:42.381594  585602 cri.go:89] found id: ""
	I1205 20:32:42.381630  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.381643  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:42.381651  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:42.381771  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:42.435039  585602 cri.go:89] found id: ""
	I1205 20:32:42.435072  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.435085  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:42.435093  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:42.435162  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:42.470567  585602 cri.go:89] found id: ""
	I1205 20:32:42.470595  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.470604  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:42.470610  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:42.470674  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:42.510695  585602 cri.go:89] found id: ""
	I1205 20:32:42.510723  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.510731  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:42.510738  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:42.510793  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:42.547687  585602 cri.go:89] found id: ""
	I1205 20:32:42.547711  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.547718  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:42.547735  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:42.547784  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:42.587160  585602 cri.go:89] found id: ""
	I1205 20:32:42.587191  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.587199  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:42.587211  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:42.587225  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:42.669543  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:42.669587  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:42.717795  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:42.717833  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:42.772644  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:42.772696  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:42.788443  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:42.788480  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:42.861560  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.362758  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:45.377178  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:45.377266  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:45.413055  585602 cri.go:89] found id: ""
	I1205 20:32:45.413088  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.413102  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:45.413111  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:45.413176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:45.453769  585602 cri.go:89] found id: ""
	I1205 20:32:45.453799  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.453808  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:45.453813  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:45.453879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:45.499481  585602 cri.go:89] found id: ""
	I1205 20:32:45.499511  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.499522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:45.499531  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:45.499598  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:45.537603  585602 cri.go:89] found id: ""
	I1205 20:32:45.537638  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.537647  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:45.537653  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:45.537707  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:45.572430  585602 cri.go:89] found id: ""
	I1205 20:32:45.572463  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.572471  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:45.572479  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:45.572556  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:45.610349  585602 cri.go:89] found id: ""
	I1205 20:32:45.610387  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.610398  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:45.610406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:45.610476  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:45.649983  585602 cri.go:89] found id: ""
	I1205 20:32:45.650018  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.650031  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:45.650038  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:45.650113  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:45.689068  585602 cri.go:89] found id: ""
	I1205 20:32:45.689099  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.689107  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:45.689118  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:45.689131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:45.743715  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:45.743758  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:45.759803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:45.759834  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:45.835107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.835133  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:45.835146  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:45.914590  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:45.914632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:43.665616  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:46.164064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.045269  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.544519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.619099  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.627948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:48.456633  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:48.475011  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:48.475086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:48.512878  585602 cri.go:89] found id: ""
	I1205 20:32:48.512913  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.512925  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:48.512933  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:48.513002  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:48.551708  585602 cri.go:89] found id: ""
	I1205 20:32:48.551737  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.551744  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:48.551751  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:48.551805  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:48.590765  585602 cri.go:89] found id: ""
	I1205 20:32:48.590791  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.590800  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:48.590806  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:48.590859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:48.629447  585602 cri.go:89] found id: ""
	I1205 20:32:48.629473  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.629481  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:48.629487  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:48.629540  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:48.667299  585602 cri.go:89] found id: ""
	I1205 20:32:48.667329  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.667339  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:48.667347  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:48.667414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:48.703771  585602 cri.go:89] found id: ""
	I1205 20:32:48.703816  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.703830  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:48.703841  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:48.703911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:48.747064  585602 cri.go:89] found id: ""
	I1205 20:32:48.747098  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.747111  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:48.747118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:48.747186  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.786608  585602 cri.go:89] found id: ""
	I1205 20:32:48.786649  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.786663  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:48.786684  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:48.786700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:48.860834  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:48.860866  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:48.860881  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:48.944029  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:48.944082  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:48.982249  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:48.982284  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:49.036460  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:49.036509  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
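The block above is one full iteration of minikube's log-gathering fallback: CRI-O reports no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard container, so the bundled kubectl's "describe nodes" call is refused on localhost:8443 and only kubelet, CRI-O, dmesg, and container-status output can be collected. The sketch below condenses the same checks for manual reproduction on the node; each command is taken verbatim from the log lines above, while the shell loop and the `minikube ssh` entry point (profile name deliberately left as a placeholder) are editorial assumptions.

    # Run inside the node, e.g. after `minikube ssh -p <profile>` (placeholder profile name).
    # Every command below appears verbatim in the log; only the loop structure is added here.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    # The call that keeps failing with "connection refused" while no apiserver is up:
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig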
	I1205 20:32:51.556456  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:51.571498  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:51.571590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:51.616890  585602 cri.go:89] found id: ""
	I1205 20:32:51.616924  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.616934  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:51.616942  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:51.617008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:51.660397  585602 cri.go:89] found id: ""
	I1205 20:32:51.660433  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.660445  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:51.660453  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:51.660543  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:51.698943  585602 cri.go:89] found id: ""
	I1205 20:32:51.698973  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.698981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:51.698988  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:51.699041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:51.737254  585602 cri.go:89] found id: ""
	I1205 20:32:51.737288  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.737297  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:51.737310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:51.737366  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:51.775560  585602 cri.go:89] found id: ""
	I1205 20:32:51.775592  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.775600  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:51.775606  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:51.775681  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:51.814314  585602 cri.go:89] found id: ""
	I1205 20:32:51.814370  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.814383  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:51.814393  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:51.814464  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:51.849873  585602 cri.go:89] found id: ""
	I1205 20:32:51.849913  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.849935  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:51.849944  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:51.850018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.164562  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:50.664498  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.044224  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.542721  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.118857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.120231  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:51.891360  585602 cri.go:89] found id: ""
	I1205 20:32:51.891388  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.891400  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:51.891412  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:51.891429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:51.943812  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:51.943854  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.959119  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:51.959152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:52.036014  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:52.036040  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:52.036059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:52.114080  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:52.114122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:54.657243  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:54.672319  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:54.672407  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:54.708446  585602 cri.go:89] found id: ""
	I1205 20:32:54.708475  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.708484  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:54.708491  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:54.708569  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:54.747309  585602 cri.go:89] found id: ""
	I1205 20:32:54.747347  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.747359  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:54.747370  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:54.747451  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:54.790742  585602 cri.go:89] found id: ""
	I1205 20:32:54.790772  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.790781  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:54.790787  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:54.790853  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:54.828857  585602 cri.go:89] found id: ""
	I1205 20:32:54.828885  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.828894  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:54.828902  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:54.828964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:54.867691  585602 cri.go:89] found id: ""
	I1205 20:32:54.867729  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.867740  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:54.867747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:54.867819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:54.907216  585602 cri.go:89] found id: ""
	I1205 20:32:54.907242  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.907249  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:54.907256  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:54.907308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:54.945800  585602 cri.go:89] found id: ""
	I1205 20:32:54.945827  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.945837  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:54.945844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:54.945895  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:54.993176  585602 cri.go:89] found id: ""
	I1205 20:32:54.993216  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.993228  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:54.993242  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:54.993258  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:55.045797  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:55.045835  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:55.060103  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:55.060136  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:55.129440  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:55.129467  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:55.129485  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:55.214949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:55.214999  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:53.164619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:55.663605  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.543148  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.543374  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.543687  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.620220  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.620759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.626643  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:57.755086  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:57.769533  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:57.769622  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:57.807812  585602 cri.go:89] found id: ""
	I1205 20:32:57.807847  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.807858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:57.807869  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:57.807941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:57.846179  585602 cri.go:89] found id: ""
	I1205 20:32:57.846209  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.846223  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:57.846232  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:57.846305  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:57.881438  585602 cri.go:89] found id: ""
	I1205 20:32:57.881473  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.881482  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:57.881496  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:57.881553  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:57.918242  585602 cri.go:89] found id: ""
	I1205 20:32:57.918283  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.918294  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:57.918302  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:57.918378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:57.962825  585602 cri.go:89] found id: ""
	I1205 20:32:57.962863  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.962873  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:57.962879  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:57.962955  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:58.004655  585602 cri.go:89] found id: ""
	I1205 20:32:58.004699  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.004711  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:58.004731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:58.004802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:58.043701  585602 cri.go:89] found id: ""
	I1205 20:32:58.043730  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.043738  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:58.043744  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:58.043802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:58.081400  585602 cri.go:89] found id: ""
	I1205 20:32:58.081437  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.081450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:58.081463  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:58.081486  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:58.135531  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:58.135573  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:58.149962  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:58.149998  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:58.227810  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:58.227834  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:58.227849  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:58.308173  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:58.308219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:00.848019  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:00.863423  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:00.863496  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:00.902526  585602 cri.go:89] found id: ""
	I1205 20:33:00.902553  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.902561  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:00.902567  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:00.902621  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:00.939891  585602 cri.go:89] found id: ""
	I1205 20:33:00.939932  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.939942  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:00.939948  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:00.940022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:00.981645  585602 cri.go:89] found id: ""
	I1205 20:33:00.981676  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.981684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:00.981691  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:00.981745  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:01.027753  585602 cri.go:89] found id: ""
	I1205 20:33:01.027780  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.027789  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:01.027795  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:01.027877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:01.064529  585602 cri.go:89] found id: ""
	I1205 20:33:01.064559  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.064567  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:01.064574  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:01.064628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:01.102239  585602 cri.go:89] found id: ""
	I1205 20:33:01.102272  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.102281  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:01.102287  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:01.102357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:01.139723  585602 cri.go:89] found id: ""
	I1205 20:33:01.139760  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.139770  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:01.139778  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:01.139845  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:01.176172  585602 cri.go:89] found id: ""
	I1205 20:33:01.176198  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.176207  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:01.176216  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:01.176231  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:01.230085  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:01.230133  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:01.245574  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:01.245617  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:01.340483  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:01.340520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:01.340537  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:01.416925  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:01.416972  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:58.164852  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.664376  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:02.677134  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.042415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.543101  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.119783  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.120647  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.958855  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:03.974024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:03.974096  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:04.021407  585602 cri.go:89] found id: ""
	I1205 20:33:04.021442  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.021451  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:04.021458  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:04.021523  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:04.063385  585602 cri.go:89] found id: ""
	I1205 20:33:04.063414  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.063423  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:04.063430  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:04.063488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:04.103693  585602 cri.go:89] found id: ""
	I1205 20:33:04.103735  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.103747  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:04.103756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:04.103815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:04.143041  585602 cri.go:89] found id: ""
	I1205 20:33:04.143072  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.143100  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:04.143109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:04.143179  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:04.180668  585602 cri.go:89] found id: ""
	I1205 20:33:04.180702  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.180712  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:04.180718  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:04.180778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:04.221848  585602 cri.go:89] found id: ""
	I1205 20:33:04.221885  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.221894  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:04.221901  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:04.222018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:04.263976  585602 cri.go:89] found id: ""
	I1205 20:33:04.264014  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.264024  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:04.264030  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:04.264097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:04.298698  585602 cri.go:89] found id: ""
	I1205 20:33:04.298726  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.298737  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:04.298751  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:04.298767  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:04.347604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:04.347659  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:04.361325  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:04.361361  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:04.437679  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:04.437704  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:04.437720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:04.520043  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:04.520103  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:05.163317  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.165936  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:08.043365  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:10.544442  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.122134  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:09.620228  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.070687  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:07.085290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:07.085367  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:07.126233  585602 cri.go:89] found id: ""
	I1205 20:33:07.126265  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.126276  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:07.126285  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:07.126346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:07.163004  585602 cri.go:89] found id: ""
	I1205 20:33:07.163040  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.163053  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:07.163061  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:07.163126  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:07.201372  585602 cri.go:89] found id: ""
	I1205 20:33:07.201412  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.201425  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:07.201435  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:07.201509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:07.237762  585602 cri.go:89] found id: ""
	I1205 20:33:07.237795  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.237807  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:07.237815  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:07.237885  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:07.273940  585602 cri.go:89] found id: ""
	I1205 20:33:07.273976  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.273985  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:07.273995  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:07.274057  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:07.311028  585602 cri.go:89] found id: ""
	I1205 20:33:07.311061  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.311070  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:07.311076  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:07.311131  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:07.347386  585602 cri.go:89] found id: ""
	I1205 20:33:07.347422  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.347433  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:07.347441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:07.347503  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:07.386412  585602 cri.go:89] found id: ""
	I1205 20:33:07.386446  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.386458  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:07.386471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:07.386489  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:07.430250  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:07.430280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:07.483936  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:07.483982  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:07.498201  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:07.498236  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:07.576741  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:07.576767  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:07.576780  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.164792  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:10.178516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:10.178596  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:10.215658  585602 cri.go:89] found id: ""
	I1205 20:33:10.215692  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.215702  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:10.215711  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:10.215779  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:10.251632  585602 cri.go:89] found id: ""
	I1205 20:33:10.251671  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.251683  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:10.251691  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:10.251763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:10.295403  585602 cri.go:89] found id: ""
	I1205 20:33:10.295435  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.295453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:10.295460  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:10.295513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:10.329747  585602 cri.go:89] found id: ""
	I1205 20:33:10.329778  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.329787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:10.329793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:10.329871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:10.369975  585602 cri.go:89] found id: ""
	I1205 20:33:10.370016  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.370028  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:10.370036  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:10.370104  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:10.408146  585602 cri.go:89] found id: ""
	I1205 20:33:10.408183  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.408196  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:10.408204  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:10.408288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:10.443803  585602 cri.go:89] found id: ""
	I1205 20:33:10.443839  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.443850  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:10.443858  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:10.443932  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:10.481784  585602 cri.go:89] found id: ""
	I1205 20:33:10.481826  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.481840  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:10.481854  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:10.481872  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:10.531449  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:10.531498  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:10.549258  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:10.549288  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:10.620162  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:10.620189  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:10.620206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.704656  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:10.704706  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:09.663940  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.163534  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:13.043720  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:15.542736  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.118781  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:14.619996  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:13.251518  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:13.264731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:13.264815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:13.297816  585602 cri.go:89] found id: ""
	I1205 20:33:13.297846  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.297855  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:13.297861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:13.297918  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:13.330696  585602 cri.go:89] found id: ""
	I1205 20:33:13.330724  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.330732  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:13.330738  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:13.330789  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:13.366257  585602 cri.go:89] found id: ""
	I1205 20:33:13.366304  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.366315  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:13.366321  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:13.366385  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:13.403994  585602 cri.go:89] found id: ""
	I1205 20:33:13.404030  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.404042  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:13.404051  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:13.404121  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:13.450160  585602 cri.go:89] found id: ""
	I1205 20:33:13.450189  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.450198  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:13.450205  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:13.450262  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:13.502593  585602 cri.go:89] found id: ""
	I1205 20:33:13.502629  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.502640  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:13.502650  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:13.502720  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:13.548051  585602 cri.go:89] found id: ""
	I1205 20:33:13.548084  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.548095  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:13.548103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:13.548166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:13.593913  585602 cri.go:89] found id: ""
	I1205 20:33:13.593947  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.593960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:13.593975  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:13.593997  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:13.674597  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:13.674628  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:13.674647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:13.760747  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:13.760796  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:13.804351  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:13.804383  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:13.856896  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:13.856958  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.372754  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:16.387165  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:16.387242  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:16.426612  585602 cri.go:89] found id: ""
	I1205 20:33:16.426655  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.426668  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:16.426676  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:16.426734  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:16.461936  585602 cri.go:89] found id: ""
	I1205 20:33:16.461974  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.461988  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:16.461997  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:16.462060  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:16.498010  585602 cri.go:89] found id: ""
	I1205 20:33:16.498044  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.498062  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:16.498069  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:16.498133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:16.533825  585602 cri.go:89] found id: ""
	I1205 20:33:16.533854  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.533863  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:16.533869  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:16.533941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:16.570834  585602 cri.go:89] found id: ""
	I1205 20:33:16.570875  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.570887  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:16.570896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:16.570968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:16.605988  585602 cri.go:89] found id: ""
	I1205 20:33:16.606026  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.606038  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:16.606047  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:16.606140  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:16.645148  585602 cri.go:89] found id: ""
	I1205 20:33:16.645178  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.645188  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:16.645195  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:16.645261  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:16.682449  585602 cri.go:89] found id: ""
	I1205 20:33:16.682479  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.682491  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:16.682502  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:16.682519  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.696944  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:16.696980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:16.777034  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:16.777064  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:16.777078  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:14.164550  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.664527  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:17.543278  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:19.543404  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.621517  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:18.626303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.854812  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:16.854880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:16.905101  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:16.905131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.463427  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:19.477135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:19.477233  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:19.529213  585602 cri.go:89] found id: ""
	I1205 20:33:19.529248  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.529264  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:19.529274  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:19.529359  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:19.575419  585602 cri.go:89] found id: ""
	I1205 20:33:19.575453  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.575465  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:19.575474  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:19.575546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:19.616657  585602 cri.go:89] found id: ""
	I1205 20:33:19.616691  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.616704  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:19.616713  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:19.616787  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:19.653142  585602 cri.go:89] found id: ""
	I1205 20:33:19.653177  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.653189  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:19.653198  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:19.653267  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:19.690504  585602 cri.go:89] found id: ""
	I1205 20:33:19.690544  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.690555  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:19.690563  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:19.690635  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:19.730202  585602 cri.go:89] found id: ""
	I1205 20:33:19.730229  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.730237  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:19.730245  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:19.730302  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:19.767212  585602 cri.go:89] found id: ""
	I1205 20:33:19.767243  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.767255  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:19.767264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:19.767336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:19.803089  585602 cri.go:89] found id: ""
	I1205 20:33:19.803125  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.803137  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:19.803163  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:19.803180  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:19.884542  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:19.884589  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:19.925257  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:19.925303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.980457  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:19.980510  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:19.997026  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:19.997057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:20.075062  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:18.664915  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.163064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:22.042272  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:24.043822  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.120054  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:23.120944  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.618857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:22.575469  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:22.588686  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:22.588768  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:22.622824  585602 cri.go:89] found id: ""
	I1205 20:33:22.622860  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.622868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:22.622874  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:22.622931  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:22.659964  585602 cri.go:89] found id: ""
	I1205 20:33:22.660059  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.660074  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:22.660085  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:22.660153  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:22.695289  585602 cri.go:89] found id: ""
	I1205 20:33:22.695325  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.695337  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:22.695345  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:22.695417  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:22.734766  585602 cri.go:89] found id: ""
	I1205 20:33:22.734801  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.734813  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:22.734821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:22.734896  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:22.773778  585602 cri.go:89] found id: ""
	I1205 20:33:22.773806  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.773818  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:22.773826  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:22.773899  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:22.811468  585602 cri.go:89] found id: ""
	I1205 20:33:22.811503  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.811514  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:22.811521  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:22.811591  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:22.852153  585602 cri.go:89] found id: ""
	I1205 20:33:22.852210  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.852221  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:22.852227  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:22.852318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:22.888091  585602 cri.go:89] found id: ""
	I1205 20:33:22.888120  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.888129  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:22.888139  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:22.888155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:22.943210  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:22.943252  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:22.958356  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:22.958393  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:23.026732  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:23.026770  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:23.026788  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:23.106356  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:23.106395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:25.650832  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:25.665392  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:25.665475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:25.701109  585602 cri.go:89] found id: ""
	I1205 20:33:25.701146  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.701155  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:25.701162  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:25.701231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:25.738075  585602 cri.go:89] found id: ""
	I1205 20:33:25.738108  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.738117  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:25.738123  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:25.738176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:25.775031  585602 cri.go:89] found id: ""
	I1205 20:33:25.775078  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.775090  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:25.775100  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:25.775173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:25.811343  585602 cri.go:89] found id: ""
	I1205 20:33:25.811376  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.811386  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:25.811395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:25.811471  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:25.846635  585602 cri.go:89] found id: ""
	I1205 20:33:25.846674  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.846684  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:25.846692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:25.846766  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:25.881103  585602 cri.go:89] found id: ""
	I1205 20:33:25.881136  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.881145  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:25.881151  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:25.881224  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:25.917809  585602 cri.go:89] found id: ""
	I1205 20:33:25.917844  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.917855  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:25.917864  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:25.917936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:25.955219  585602 cri.go:89] found id: ""
	I1205 20:33:25.955245  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.955254  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:25.955264  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:25.955276  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:26.007016  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:26.007059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:26.021554  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:26.021601  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:26.099290  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:26.099321  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:26.099334  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:26.182955  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:26.182993  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:23.164876  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.665151  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:26.542519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:28.542856  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.542941  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:27.621687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.119140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:28.725201  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:28.739515  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:28.739602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.778187  585602 cri.go:89] found id: ""
	I1205 20:33:28.778230  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.778242  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:28.778249  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:28.778315  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:28.815788  585602 cri.go:89] found id: ""
	I1205 20:33:28.815826  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.815838  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:28.815845  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:28.815912  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:28.852222  585602 cri.go:89] found id: ""
	I1205 20:33:28.852251  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.852261  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:28.852289  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:28.852362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:28.889742  585602 cri.go:89] found id: ""
	I1205 20:33:28.889776  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.889787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:28.889794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:28.889859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:28.926872  585602 cri.go:89] found id: ""
	I1205 20:33:28.926903  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.926912  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:28.926919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:28.926972  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:28.963380  585602 cri.go:89] found id: ""
	I1205 20:33:28.963418  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.963432  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:28.963441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:28.963509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:29.000711  585602 cri.go:89] found id: ""
	I1205 20:33:29.000746  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.000764  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:29.000772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:29.000848  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:29.035934  585602 cri.go:89] found id: ""
	I1205 20:33:29.035963  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.035974  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:29.035987  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:29.036003  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:29.091336  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:29.091382  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:29.105784  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:29.105814  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:29.182038  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:29.182078  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:29.182095  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:29.261107  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:29.261153  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:31.802911  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:31.817285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:31.817369  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.164470  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.664154  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:33.043654  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.044730  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:32.120759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:34.619618  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:31.854865  585602 cri.go:89] found id: ""
	I1205 20:33:31.854900  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.854914  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:31.854922  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:31.854995  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:31.893928  585602 cri.go:89] found id: ""
	I1205 20:33:31.893964  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.893977  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:31.893984  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:31.894053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:31.929490  585602 cri.go:89] found id: ""
	I1205 20:33:31.929527  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.929540  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:31.929548  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:31.929637  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:31.964185  585602 cri.go:89] found id: ""
	I1205 20:33:31.964211  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.964219  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:31.964225  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:31.964291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:32.002708  585602 cri.go:89] found id: ""
	I1205 20:33:32.002748  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.002760  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:32.002768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:32.002847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:32.040619  585602 cri.go:89] found id: ""
	I1205 20:33:32.040712  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.040740  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:32.040758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:32.040839  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:32.079352  585602 cri.go:89] found id: ""
	I1205 20:33:32.079390  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.079404  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:32.079412  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:32.079484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:32.117560  585602 cri.go:89] found id: ""
	I1205 20:33:32.117596  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.117608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:32.117629  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:32.117653  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:32.172639  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:32.172686  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:32.187687  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:32.187727  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:32.265000  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:32.265034  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:32.265051  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:32.348128  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:32.348176  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:34.890144  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:34.903953  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:34.904032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:34.939343  585602 cri.go:89] found id: ""
	I1205 20:33:34.939374  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.939383  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:34.939389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:34.939444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:34.978225  585602 cri.go:89] found id: ""
	I1205 20:33:34.978266  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.978278  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:34.978286  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:34.978363  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:35.015918  585602 cri.go:89] found id: ""
	I1205 20:33:35.015950  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.015960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:35.015966  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:35.016032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:35.053222  585602 cri.go:89] found id: ""
	I1205 20:33:35.053249  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.053257  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:35.053264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:35.053320  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:35.088369  585602 cri.go:89] found id: ""
	I1205 20:33:35.088401  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.088412  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:35.088421  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:35.088498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:35.135290  585602 cri.go:89] found id: ""
	I1205 20:33:35.135327  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.135338  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:35.135346  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:35.135412  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:35.174959  585602 cri.go:89] found id: ""
	I1205 20:33:35.174996  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.175008  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:35.175017  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:35.175097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:35.215101  585602 cri.go:89] found id: ""
	I1205 20:33:35.215134  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.215143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:35.215152  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:35.215167  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:35.269372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:35.269414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:35.285745  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:35.285776  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:35.364774  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:35.364807  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:35.364824  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:35.445932  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:35.445980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:33.163790  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.163966  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.164819  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.047128  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.543051  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:36.620450  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.120055  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.996837  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:38.010545  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:38.010612  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:38.048292  585602 cri.go:89] found id: ""
	I1205 20:33:38.048334  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.048350  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:38.048360  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:38.048429  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:38.086877  585602 cri.go:89] found id: ""
	I1205 20:33:38.086911  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.086921  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:38.086927  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:38.087001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:38.122968  585602 cri.go:89] found id: ""
	I1205 20:33:38.122999  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.123010  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:38.123018  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:38.123082  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:38.164901  585602 cri.go:89] found id: ""
	I1205 20:33:38.164940  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.164949  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:38.164955  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:38.165006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:38.200697  585602 cri.go:89] found id: ""
	I1205 20:33:38.200725  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.200734  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:38.200740  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:38.200803  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:38.240306  585602 cri.go:89] found id: ""
	I1205 20:33:38.240338  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.240347  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:38.240354  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:38.240424  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:38.275788  585602 cri.go:89] found id: ""
	I1205 20:33:38.275823  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.275835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:38.275844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:38.275917  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:38.311431  585602 cri.go:89] found id: ""
	I1205 20:33:38.311468  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.311480  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:38.311493  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:38.311507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:38.361472  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:38.361515  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:38.375970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:38.376004  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:38.450913  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:38.450941  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:38.450961  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:38.527620  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:38.527666  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:41.072438  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:41.086085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:41.086168  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:41.123822  585602 cri.go:89] found id: ""
	I1205 20:33:41.123852  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.123861  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:41.123868  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:41.123919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:41.160343  585602 cri.go:89] found id: ""
	I1205 20:33:41.160371  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.160380  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:41.160389  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:41.160457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:41.198212  585602 cri.go:89] found id: ""
	I1205 20:33:41.198240  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.198249  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:41.198255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:41.198309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:41.233793  585602 cri.go:89] found id: ""
	I1205 20:33:41.233824  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.233832  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:41.233838  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:41.233890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:41.269397  585602 cri.go:89] found id: ""
	I1205 20:33:41.269435  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.269447  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:41.269457  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:41.269529  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:41.303079  585602 cri.go:89] found id: ""
	I1205 20:33:41.303116  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.303128  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:41.303136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:41.303196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:41.337784  585602 cri.go:89] found id: ""
	I1205 20:33:41.337817  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.337826  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:41.337832  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:41.337901  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:41.371410  585602 cri.go:89] found id: ""
	I1205 20:33:41.371438  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.371446  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:41.371456  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:41.371467  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:41.422768  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:41.422807  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:41.437427  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:41.437461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:41.510875  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:41.510898  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:41.510915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:41.590783  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:41.590826  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:39.667344  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.172287  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.043022  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.543222  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:41.120670  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:43.622132  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:45.623483  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.136390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:44.149935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:44.150006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:44.187807  585602 cri.go:89] found id: ""
	I1205 20:33:44.187846  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.187858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:44.187866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:44.187933  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:44.224937  585602 cri.go:89] found id: ""
	I1205 20:33:44.224965  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.224973  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:44.224978  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:44.225040  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:44.260230  585602 cri.go:89] found id: ""
	I1205 20:33:44.260274  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.260287  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:44.260297  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:44.260439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:44.296410  585602 cri.go:89] found id: ""
	I1205 20:33:44.296439  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.296449  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:44.296455  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:44.296507  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:44.332574  585602 cri.go:89] found id: ""
	I1205 20:33:44.332623  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.332635  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:44.332642  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:44.332709  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:44.368925  585602 cri.go:89] found id: ""
	I1205 20:33:44.368973  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.368985  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:44.368994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:44.369068  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:44.410041  585602 cri.go:89] found id: ""
	I1205 20:33:44.410075  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.410088  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:44.410095  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:44.410165  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:44.454254  585602 cri.go:89] found id: ""
	I1205 20:33:44.454295  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.454316  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:44.454330  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:44.454346  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:44.507604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:44.507669  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:44.525172  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:44.525219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:44.599417  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:44.599446  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:44.599465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:44.681624  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:44.681685  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:44.664942  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.163452  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.043225  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:49.044675  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:48.120302  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:50.120568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.230092  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:47.243979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:47.244076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:47.280346  585602 cri.go:89] found id: ""
	I1205 20:33:47.280376  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.280385  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:47.280392  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:47.280448  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:47.316454  585602 cri.go:89] found id: ""
	I1205 20:33:47.316479  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.316487  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:47.316493  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:47.316546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:47.353339  585602 cri.go:89] found id: ""
	I1205 20:33:47.353374  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.353386  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:47.353395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:47.353466  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:47.388256  585602 cri.go:89] found id: ""
	I1205 20:33:47.388319  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.388330  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:47.388339  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:47.388408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:47.424907  585602 cri.go:89] found id: ""
	I1205 20:33:47.424942  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.424953  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:47.424961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:47.425035  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:47.461386  585602 cri.go:89] found id: ""
	I1205 20:33:47.461416  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.461425  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:47.461431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:47.461485  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:47.501092  585602 cri.go:89] found id: ""
	I1205 20:33:47.501121  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.501130  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:47.501136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:47.501189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:47.559478  585602 cri.go:89] found id: ""
	I1205 20:33:47.559507  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.559520  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:47.559533  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:47.559551  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:47.609761  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:47.609800  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:47.626579  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:47.626606  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:47.713490  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:47.713520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:47.713540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:47.795346  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:47.795398  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.339441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:50.353134  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:50.353216  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:50.393950  585602 cri.go:89] found id: ""
	I1205 20:33:50.393979  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.393990  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:50.394007  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:50.394074  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:50.431166  585602 cri.go:89] found id: ""
	I1205 20:33:50.431201  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.431212  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:50.431221  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:50.431291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:50.472641  585602 cri.go:89] found id: ""
	I1205 20:33:50.472674  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.472684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:50.472692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:50.472763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:50.512111  585602 cri.go:89] found id: ""
	I1205 20:33:50.512152  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.512165  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:50.512173  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:50.512247  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:50.554500  585602 cri.go:89] found id: ""
	I1205 20:33:50.554536  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.554549  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:50.554558  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:50.554625  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:50.590724  585602 cri.go:89] found id: ""
	I1205 20:33:50.590755  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.590764  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:50.590771  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:50.590837  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:50.628640  585602 cri.go:89] found id: ""
	I1205 20:33:50.628666  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.628675  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:50.628681  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:50.628732  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:50.670009  585602 cri.go:89] found id: ""
	I1205 20:33:50.670039  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.670047  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:50.670063  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:50.670075  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:50.684236  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:50.684290  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:50.757761  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:50.757790  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:50.757813  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:50.839665  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:50.839720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.881087  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:50.881122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:49.164986  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.665655  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.543286  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.543689  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:52.621297  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:54.621764  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.433345  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:53.446747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:53.446819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:53.482928  585602 cri.go:89] found id: ""
	I1205 20:33:53.482967  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.482979  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:53.482988  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:53.483048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:53.519096  585602 cri.go:89] found id: ""
	I1205 20:33:53.519128  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.519136  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:53.519142  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:53.519196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:53.556207  585602 cri.go:89] found id: ""
	I1205 20:33:53.556233  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.556243  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:53.556249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:53.556346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:53.589708  585602 cri.go:89] found id: ""
	I1205 20:33:53.589736  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.589745  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:53.589758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:53.589813  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:53.630344  585602 cri.go:89] found id: ""
	I1205 20:33:53.630371  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.630380  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:53.630386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:53.630438  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:53.668895  585602 cri.go:89] found id: ""
	I1205 20:33:53.668921  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.668929  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:53.668935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:53.668987  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:53.706601  585602 cri.go:89] found id: ""
	I1205 20:33:53.706628  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.706638  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:53.706644  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:53.706704  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:53.744922  585602 cri.go:89] found id: ""
	I1205 20:33:53.744952  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.744960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:53.744970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:53.744989  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:53.823816  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:53.823853  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:53.823928  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:53.905075  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:53.905118  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:53.955424  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:53.955468  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:54.014871  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:54.014916  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.537142  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:56.550409  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:56.550478  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:56.587148  585602 cri.go:89] found id: ""
	I1205 20:33:56.587174  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.587184  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:56.587190  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:56.587249  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:56.625153  585602 cri.go:89] found id: ""
	I1205 20:33:56.625180  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.625188  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:56.625193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:56.625243  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:56.671545  585602 cri.go:89] found id: ""
	I1205 20:33:56.671573  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.671582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:56.671589  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:56.671652  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:56.712760  585602 cri.go:89] found id: ""
	I1205 20:33:56.712797  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.712810  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:56.712818  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:56.712890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:56.751219  585602 cri.go:89] found id: ""
	I1205 20:33:56.751254  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.751266  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:56.751274  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:56.751340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:56.787946  585602 cri.go:89] found id: ""
	I1205 20:33:56.787985  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.787998  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:56.788007  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:56.788101  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:56.823057  585602 cri.go:89] found id: ""
	I1205 20:33:56.823095  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.823108  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:56.823114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:56.823170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:54.164074  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.165063  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.043193  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:58.044158  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.542798  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.624407  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:59.119743  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.860358  585602 cri.go:89] found id: ""
	I1205 20:33:56.860396  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.860408  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:56.860421  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:56.860438  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:56.912954  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:56.912996  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.927642  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:56.927691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:57.007316  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:57.007344  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:57.007359  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:57.091471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:57.091522  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:59.642150  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:59.656240  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:59.656324  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:59.695918  585602 cri.go:89] found id: ""
	I1205 20:33:59.695954  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.695965  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:59.695973  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:59.696037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:59.744218  585602 cri.go:89] found id: ""
	I1205 20:33:59.744250  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.744260  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:59.744278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:59.744340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:59.799035  585602 cri.go:89] found id: ""
	I1205 20:33:59.799081  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.799094  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:59.799102  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:59.799172  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:59.850464  585602 cri.go:89] found id: ""
	I1205 20:33:59.850505  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.850517  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:59.850526  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:59.850590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:59.886441  585602 cri.go:89] found id: ""
	I1205 20:33:59.886477  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.886489  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:59.886497  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:59.886564  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:59.926689  585602 cri.go:89] found id: ""
	I1205 20:33:59.926728  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.926741  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:59.926751  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:59.926821  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:59.962615  585602 cri.go:89] found id: ""
	I1205 20:33:59.962644  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.962653  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:59.962659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:59.962716  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:00.001852  585602 cri.go:89] found id: ""
	I1205 20:34:00.001878  585602 logs.go:282] 0 containers: []
	W1205 20:34:00.001886  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:00.001897  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:00.001913  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:00.055465  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:00.055508  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:00.071904  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:00.071941  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:00.151225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:00.151248  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:00.151262  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:00.233869  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:00.233914  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:58.664773  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.664948  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:02.543019  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:04.543810  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:01.120136  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:03.120824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.620283  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:02.776751  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:02.790868  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:02.790945  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:02.834686  585602 cri.go:89] found id: ""
	I1205 20:34:02.834719  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.834731  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:02.834740  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:02.834823  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:02.871280  585602 cri.go:89] found id: ""
	I1205 20:34:02.871313  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.871333  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:02.871342  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:02.871413  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:02.907300  585602 cri.go:89] found id: ""
	I1205 20:34:02.907336  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.907346  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:02.907352  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:02.907406  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:02.945453  585602 cri.go:89] found id: ""
	I1205 20:34:02.945487  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.945499  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:02.945511  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:02.945587  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:02.980528  585602 cri.go:89] found id: ""
	I1205 20:34:02.980561  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.980573  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:02.980580  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:02.980653  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:03.016919  585602 cri.go:89] found id: ""
	I1205 20:34:03.016946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.016955  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:03.016961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:03.017012  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:03.053541  585602 cri.go:89] found id: ""
	I1205 20:34:03.053575  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.053588  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:03.053596  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:03.053655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:03.089907  585602 cri.go:89] found id: ""
	I1205 20:34:03.089946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.089959  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:03.089974  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:03.089991  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:03.144663  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:03.144700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:03.160101  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:03.160140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:03.231559  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:03.231583  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:03.231600  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:03.313226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:03.313271  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:05.855538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:05.869019  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:05.869120  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:05.906879  585602 cri.go:89] found id: ""
	I1205 20:34:05.906910  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.906921  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:05.906928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:05.906994  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:05.946846  585602 cri.go:89] found id: ""
	I1205 20:34:05.946881  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.946893  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:05.946900  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:05.946968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:05.984067  585602 cri.go:89] found id: ""
	I1205 20:34:05.984104  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.984118  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:05.984127  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:05.984193  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:06.024984  585602 cri.go:89] found id: ""
	I1205 20:34:06.025014  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.025023  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:06.025029  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:06.025091  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:06.064766  585602 cri.go:89] found id: ""
	I1205 20:34:06.064794  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.064806  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:06.064821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:06.064877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:06.105652  585602 cri.go:89] found id: ""
	I1205 20:34:06.105683  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.105691  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:06.105698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:06.105748  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:06.143732  585602 cri.go:89] found id: ""
	I1205 20:34:06.143762  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.143773  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:06.143781  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:06.143857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:06.183397  585602 cri.go:89] found id: ""
	I1205 20:34:06.183429  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.183439  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:06.183449  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:06.183462  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:06.236403  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:06.236449  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:06.250728  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:06.250759  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:06.320983  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:06.321009  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:06.321025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:06.408037  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:06.408084  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:03.164354  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.665345  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:07.044218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:09.543580  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.119532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.119918  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.955959  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:08.968956  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:08.969037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:09.002804  585602 cri.go:89] found id: ""
	I1205 20:34:09.002846  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.002859  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:09.002866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:09.002935  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:09.039098  585602 cri.go:89] found id: ""
	I1205 20:34:09.039191  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.039210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:09.039220  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:09.039291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:09.074727  585602 cri.go:89] found id: ""
	I1205 20:34:09.074764  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.074776  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:09.074792  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:09.074861  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:09.112650  585602 cri.go:89] found id: ""
	I1205 20:34:09.112682  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.112692  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:09.112698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:09.112754  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:09.149301  585602 cri.go:89] found id: ""
	I1205 20:34:09.149346  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.149359  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:09.149368  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:09.149432  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:09.190288  585602 cri.go:89] found id: ""
	I1205 20:34:09.190317  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.190329  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:09.190338  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:09.190404  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:09.225311  585602 cri.go:89] found id: ""
	I1205 20:34:09.225348  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.225361  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:09.225369  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:09.225435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:09.261023  585602 cri.go:89] found id: ""
	I1205 20:34:09.261052  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.261063  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:09.261075  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:09.261092  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:09.313733  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:09.313785  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:09.329567  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:09.329619  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:09.403397  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:09.403430  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:09.403447  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:09.486586  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:09.486630  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:08.163730  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.663603  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.665663  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:11.544538  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.042854  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.120629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.621977  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.028110  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:12.041802  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:12.041866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:12.080349  585602 cri.go:89] found id: ""
	I1205 20:34:12.080388  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.080402  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:12.080410  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:12.080475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:12.121455  585602 cri.go:89] found id: ""
	I1205 20:34:12.121486  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.121499  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:12.121507  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:12.121567  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:12.157743  585602 cri.go:89] found id: ""
	I1205 20:34:12.157768  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.157785  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:12.157794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:12.157855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:12.196901  585602 cri.go:89] found id: ""
	I1205 20:34:12.196933  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.196946  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:12.196954  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:12.197024  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:12.234471  585602 cri.go:89] found id: ""
	I1205 20:34:12.234500  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.234508  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:12.234516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:12.234585  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:12.269238  585602 cri.go:89] found id: ""
	I1205 20:34:12.269263  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.269271  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:12.269278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:12.269340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:12.307965  585602 cri.go:89] found id: ""
	I1205 20:34:12.308006  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.308016  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:12.308022  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:12.308081  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:12.343463  585602 cri.go:89] found id: ""
	I1205 20:34:12.343497  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.343510  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:12.343536  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:12.343574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:12.393393  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:12.393437  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:12.407991  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:12.408025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:12.477868  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:12.477910  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:12.477924  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:12.557274  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:12.557315  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.102587  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:15.115734  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:15.115808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:15.153057  585602 cri.go:89] found id: ""
	I1205 20:34:15.153091  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.153105  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:15.153113  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:15.153182  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:15.192762  585602 cri.go:89] found id: ""
	I1205 20:34:15.192815  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.192825  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:15.192831  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:15.192887  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:15.231330  585602 cri.go:89] found id: ""
	I1205 20:34:15.231364  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.231374  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:15.231380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:15.231435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:15.265229  585602 cri.go:89] found id: ""
	I1205 20:34:15.265262  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.265271  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:15.265278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:15.265350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:15.299596  585602 cri.go:89] found id: ""
	I1205 20:34:15.299624  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.299634  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:15.299640  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:15.299699  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:15.336155  585602 cri.go:89] found id: ""
	I1205 20:34:15.336187  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.336195  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:15.336202  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:15.336256  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:15.371867  585602 cri.go:89] found id: ""
	I1205 20:34:15.371899  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.371909  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:15.371920  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:15.371976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:15.408536  585602 cri.go:89] found id: ""
	I1205 20:34:15.408566  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.408580  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:15.408592  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:15.408609  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:15.422499  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:15.422538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:15.495096  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:15.495131  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:15.495145  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:15.571411  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:15.571461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.612284  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:15.612319  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:15.165343  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.165619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:16.043962  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.542495  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.119936  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:19.622046  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.168869  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:18.184247  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:18.184370  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:18.226078  585602 cri.go:89] found id: ""
	I1205 20:34:18.226112  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.226124  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:18.226133  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:18.226202  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:18.266221  585602 cri.go:89] found id: ""
	I1205 20:34:18.266258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.266270  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:18.266278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:18.266349  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:18.305876  585602 cri.go:89] found id: ""
	I1205 20:34:18.305903  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.305912  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:18.305921  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:18.305971  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:18.342044  585602 cri.go:89] found id: ""
	I1205 20:34:18.342077  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.342089  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:18.342098  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:18.342160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:18.380240  585602 cri.go:89] found id: ""
	I1205 20:34:18.380290  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.380301  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:18.380310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:18.380372  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:18.416228  585602 cri.go:89] found id: ""
	I1205 20:34:18.416258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.416301  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:18.416311  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:18.416380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:18.453368  585602 cri.go:89] found id: ""
	I1205 20:34:18.453407  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.453420  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:18.453429  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:18.453513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:18.491689  585602 cri.go:89] found id: ""
	I1205 20:34:18.491727  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.491739  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:18.491754  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:18.491779  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:18.546614  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:18.546652  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:18.560516  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:18.560547  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:18.637544  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:18.637568  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:18.637582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:18.720410  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:18.720453  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:21.261494  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:21.276378  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:21.276473  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:21.317571  585602 cri.go:89] found id: ""
	I1205 20:34:21.317602  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.317610  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:21.317617  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:21.317670  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:21.355174  585602 cri.go:89] found id: ""
	I1205 20:34:21.355202  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.355210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:21.355217  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:21.355277  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:21.393259  585602 cri.go:89] found id: ""
	I1205 20:34:21.393297  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.393310  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:21.393317  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:21.393408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:21.432286  585602 cri.go:89] found id: ""
	I1205 20:34:21.432329  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.432341  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:21.432348  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:21.432415  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:21.469844  585602 cri.go:89] found id: ""
	I1205 20:34:21.469877  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.469888  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:21.469896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:21.469964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:21.508467  585602 cri.go:89] found id: ""
	I1205 20:34:21.508507  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.508519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:21.508528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:21.508592  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:21.553053  585602 cri.go:89] found id: ""
	I1205 20:34:21.553185  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.553208  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:21.553226  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:21.553317  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:21.590595  585602 cri.go:89] found id: ""
	I1205 20:34:21.590629  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.590640  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:21.590654  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:21.590672  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:21.649493  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:21.649546  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:21.666114  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:21.666147  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:21.742801  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:21.742828  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:21.742858  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:21.822949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:21.823010  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:19.165951  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.664450  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.043233  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:23.043477  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:25.543490  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:22.119177  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.119685  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.366575  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:24.380894  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:24.380992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:24.416907  585602 cri.go:89] found id: ""
	I1205 20:34:24.416943  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.416956  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:24.416965  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:24.417034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:24.453303  585602 cri.go:89] found id: ""
	I1205 20:34:24.453337  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.453349  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:24.453358  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:24.453445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:24.496795  585602 cri.go:89] found id: ""
	I1205 20:34:24.496825  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.496833  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:24.496839  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:24.496907  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:24.539105  585602 cri.go:89] found id: ""
	I1205 20:34:24.539142  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.539154  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:24.539162  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:24.539230  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:24.576778  585602 cri.go:89] found id: ""
	I1205 20:34:24.576808  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.576816  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:24.576822  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:24.576879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:24.617240  585602 cri.go:89] found id: ""
	I1205 20:34:24.617271  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.617280  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:24.617293  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:24.617374  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:24.659274  585602 cri.go:89] found id: ""
	I1205 20:34:24.659316  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.659330  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:24.659342  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:24.659408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:24.701047  585602 cri.go:89] found id: ""
	I1205 20:34:24.701092  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.701105  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:24.701121  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:24.701139  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:24.741070  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:24.741115  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:24.793364  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:24.793407  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:24.807803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:24.807839  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:24.883194  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:24.883225  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:24.883243  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:24.163198  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.165402  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:27.544607  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.044244  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.619847  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:28.621467  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.621704  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:27.467460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:27.483055  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:27.483129  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:27.523718  585602 cri.go:89] found id: ""
	I1205 20:34:27.523752  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.523763  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:27.523772  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:27.523841  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:27.562872  585602 cri.go:89] found id: ""
	I1205 20:34:27.562899  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.562908  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:27.562915  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:27.562976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:27.601804  585602 cri.go:89] found id: ""
	I1205 20:34:27.601835  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.601845  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:27.601852  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:27.601916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:27.640553  585602 cri.go:89] found id: ""
	I1205 20:34:27.640589  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.640599  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:27.640605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:27.640672  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:27.680983  585602 cri.go:89] found id: ""
	I1205 20:34:27.681015  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.681027  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:27.681035  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:27.681105  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:27.720766  585602 cri.go:89] found id: ""
	I1205 20:34:27.720811  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.720821  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:27.720828  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:27.720886  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:27.761422  585602 cri.go:89] found id: ""
	I1205 20:34:27.761453  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.761466  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:27.761480  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:27.761550  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:27.799658  585602 cri.go:89] found id: ""
	I1205 20:34:27.799692  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.799705  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:27.799720  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:27.799736  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:27.851801  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:27.851845  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:27.865953  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:27.865984  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:27.941787  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:27.941824  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:27.941840  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:28.023556  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:28.023616  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:30.573267  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:30.586591  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:30.586679  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:30.629923  585602 cri.go:89] found id: ""
	I1205 20:34:30.629960  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.629974  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:30.629982  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:30.630048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:30.667045  585602 cri.go:89] found id: ""
	I1205 20:34:30.667078  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.667090  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:30.667098  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:30.667167  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:30.704479  585602 cri.go:89] found id: ""
	I1205 20:34:30.704510  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.704522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:30.704530  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:30.704620  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:30.746035  585602 cri.go:89] found id: ""
	I1205 20:34:30.746065  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.746077  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:30.746085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:30.746161  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:30.784375  585602 cri.go:89] found id: ""
	I1205 20:34:30.784415  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.784425  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:30.784431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:30.784487  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:30.821779  585602 cri.go:89] found id: ""
	I1205 20:34:30.821811  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.821822  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:30.821831  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:30.821905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:30.856927  585602 cri.go:89] found id: ""
	I1205 20:34:30.856963  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.856976  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:30.856984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:30.857088  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:30.895852  585602 cri.go:89] found id: ""
	I1205 20:34:30.895882  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.895894  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:30.895914  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:30.895930  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:30.947600  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:30.947642  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:30.962717  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:30.962753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:31.049225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:31.049262  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:31.049280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:31.126806  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:31.126850  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:28.665006  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:31.164172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:32.548634  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.042159  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.120370  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.621247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.670844  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:33.685063  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:33.685160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:33.718277  585602 cri.go:89] found id: ""
	I1205 20:34:33.718312  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.718321  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:33.718327  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:33.718378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.755409  585602 cri.go:89] found id: ""
	I1205 20:34:33.755445  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.755456  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:33.755465  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:33.755542  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:33.809447  585602 cri.go:89] found id: ""
	I1205 20:34:33.809506  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.809519  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:33.809527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:33.809599  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:33.848327  585602 cri.go:89] found id: ""
	I1205 20:34:33.848362  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.848376  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:33.848384  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:33.848444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:33.887045  585602 cri.go:89] found id: ""
	I1205 20:34:33.887082  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.887094  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:33.887103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:33.887178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:33.924385  585602 cri.go:89] found id: ""
	I1205 20:34:33.924418  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.924427  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:33.924434  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:33.924499  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:33.960711  585602 cri.go:89] found id: ""
	I1205 20:34:33.960738  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.960747  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:33.960757  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:33.960808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:33.998150  585602 cri.go:89] found id: ""
	I1205 20:34:33.998184  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.998193  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:33.998203  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:33.998215  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:34.041977  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:34.042006  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:34.095895  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:34.095940  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:34.109802  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:34.109836  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:34.185716  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:34.185740  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:34.185753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:36.767768  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:36.782114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:36.782201  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:36.820606  585602 cri.go:89] found id: ""
	I1205 20:34:36.820647  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.820659  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:36.820668  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:36.820736  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.164572  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.664069  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:37.043102  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:39.544667  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:38.120555  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.619948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:36.858999  585602 cri.go:89] found id: ""
	I1205 20:34:36.859033  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.859044  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:36.859051  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:36.859117  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:36.896222  585602 cri.go:89] found id: ""
	I1205 20:34:36.896257  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.896282  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:36.896290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:36.896352  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:36.935565  585602 cri.go:89] found id: ""
	I1205 20:34:36.935602  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.935612  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:36.935618  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:36.935671  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:36.974031  585602 cri.go:89] found id: ""
	I1205 20:34:36.974066  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.974079  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:36.974096  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:36.974166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:37.018243  585602 cri.go:89] found id: ""
	I1205 20:34:37.018278  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.018290  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:37.018300  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:37.018371  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:37.057715  585602 cri.go:89] found id: ""
	I1205 20:34:37.057742  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.057750  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:37.057756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:37.057806  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:37.099006  585602 cri.go:89] found id: ""
	I1205 20:34:37.099037  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.099045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:37.099055  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:37.099070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:37.186218  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:37.186264  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:37.232921  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:37.232955  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:37.285539  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:37.285581  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:37.301115  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:37.301155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:37.373249  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:39.873692  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:39.887772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:39.887847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:39.925558  585602 cri.go:89] found id: ""
	I1205 20:34:39.925595  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.925607  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:39.925615  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:39.925684  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:39.964967  585602 cri.go:89] found id: ""
	I1205 20:34:39.964994  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.965004  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:39.965011  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:39.965073  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:40.010875  585602 cri.go:89] found id: ""
	I1205 20:34:40.010911  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.010923  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:40.010930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:40.011003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:40.050940  585602 cri.go:89] found id: ""
	I1205 20:34:40.050970  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.050981  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:40.050990  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:40.051052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:40.086157  585602 cri.go:89] found id: ""
	I1205 20:34:40.086197  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.086210  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:40.086219  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:40.086283  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:40.123280  585602 cri.go:89] found id: ""
	I1205 20:34:40.123321  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.123333  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:40.123344  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:40.123414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:40.164755  585602 cri.go:89] found id: ""
	I1205 20:34:40.164784  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.164793  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:40.164800  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:40.164871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:40.211566  585602 cri.go:89] found id: ""
	I1205 20:34:40.211595  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.211608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:40.211621  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:40.211638  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:40.275269  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:40.275326  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:40.303724  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:40.303754  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:40.377315  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:40.377345  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:40.377360  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:40.457744  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:40.457794  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:38.163598  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.164173  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.043947  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:44.542445  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.621824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:45.120127  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:43.000390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:43.015220  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:43.015308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:43.051919  585602 cri.go:89] found id: ""
	I1205 20:34:43.051946  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.051955  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:43.051961  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:43.052034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:43.088188  585602 cri.go:89] found id: ""
	I1205 20:34:43.088230  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.088241  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:43.088249  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:43.088350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:43.125881  585602 cri.go:89] found id: ""
	I1205 20:34:43.125910  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.125922  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:43.125930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:43.125988  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:43.166630  585602 cri.go:89] found id: ""
	I1205 20:34:43.166657  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.166674  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:43.166682  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:43.166744  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:43.206761  585602 cri.go:89] found id: ""
	I1205 20:34:43.206791  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.206803  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:43.206810  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:43.206873  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:43.242989  585602 cri.go:89] found id: ""
	I1205 20:34:43.243017  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.243026  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:43.243033  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:43.243094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:43.281179  585602 cri.go:89] found id: ""
	I1205 20:34:43.281208  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.281217  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:43.281223  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:43.281272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:43.317283  585602 cri.go:89] found id: ""
	I1205 20:34:43.317314  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.317326  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:43.317347  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:43.317362  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:43.369262  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:43.369303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:43.386137  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:43.386182  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:43.458532  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:43.458553  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:43.458566  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:43.538254  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:43.538296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:46.083593  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:46.101024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:46.101133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:46.169786  585602 cri.go:89] found id: ""
	I1205 20:34:46.169817  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.169829  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:46.169838  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:46.169905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:46.218647  585602 cri.go:89] found id: ""
	I1205 20:34:46.218689  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.218704  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:46.218713  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:46.218790  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:46.262718  585602 cri.go:89] found id: ""
	I1205 20:34:46.262749  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.262758  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:46.262764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:46.262846  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:46.301606  585602 cri.go:89] found id: ""
	I1205 20:34:46.301638  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.301649  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:46.301656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:46.301714  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:46.337313  585602 cri.go:89] found id: ""
	I1205 20:34:46.337347  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.337356  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:46.337362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:46.337422  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:46.380171  585602 cri.go:89] found id: ""
	I1205 20:34:46.380201  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.380209  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:46.380215  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:46.380288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:46.423054  585602 cri.go:89] found id: ""
	I1205 20:34:46.423089  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.423101  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:46.423109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:46.423178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:46.467615  585602 cri.go:89] found id: ""
	I1205 20:34:46.467647  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.467659  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:46.467673  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:46.467687  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:46.522529  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:46.522579  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:46.537146  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:46.537199  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:46.609585  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:46.609618  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:46.609637  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:46.696093  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:46.696152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:45.164249  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.664159  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:46.547883  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.043793  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.623375  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:50.122680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.238735  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:49.256406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:49.256484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:49.294416  585602 cri.go:89] found id: ""
	I1205 20:34:49.294449  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.294458  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:49.294467  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:49.294528  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:49.334235  585602 cri.go:89] found id: ""
	I1205 20:34:49.334268  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.334282  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:49.334290  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:49.334362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:49.372560  585602 cri.go:89] found id: ""
	I1205 20:34:49.372637  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.372662  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:49.372674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:49.372756  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:49.413779  585602 cri.go:89] found id: ""
	I1205 20:34:49.413813  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.413822  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:49.413829  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:49.413900  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:49.449513  585602 cri.go:89] found id: ""
	I1205 20:34:49.449543  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.449553  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:49.449560  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:49.449630  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:49.488923  585602 cri.go:89] found id: ""
	I1205 20:34:49.488961  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.488973  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:49.488982  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:49.489050  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:49.524922  585602 cri.go:89] found id: ""
	I1205 20:34:49.524959  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.524971  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:49.524980  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:49.525048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:49.565700  585602 cri.go:89] found id: ""
	I1205 20:34:49.565735  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.565745  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:49.565756  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:49.565769  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:49.624297  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:49.624339  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:49.641424  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:49.641465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:49.721474  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:49.721504  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:49.721517  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:49.810777  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:49.810822  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:49.664998  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.163337  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:51.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:54.045218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.621649  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:55.120035  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.354661  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:52.368481  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:52.368555  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:52.407081  585602 cri.go:89] found id: ""
	I1205 20:34:52.407110  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.407118  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:52.407125  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:52.407189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:52.444462  585602 cri.go:89] found id: ""
	I1205 20:34:52.444489  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.444498  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:52.444505  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:52.444562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:52.483546  585602 cri.go:89] found id: ""
	I1205 20:34:52.483573  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.483582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:52.483595  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:52.483648  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:52.526529  585602 cri.go:89] found id: ""
	I1205 20:34:52.526567  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.526579  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:52.526587  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:52.526655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:52.564875  585602 cri.go:89] found id: ""
	I1205 20:34:52.564904  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.564913  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:52.564919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:52.564984  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:52.599367  585602 cri.go:89] found id: ""
	I1205 20:34:52.599397  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.599410  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:52.599419  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:52.599475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:52.638192  585602 cri.go:89] found id: ""
	I1205 20:34:52.638233  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.638247  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:52.638255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:52.638336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:52.675227  585602 cri.go:89] found id: ""
	I1205 20:34:52.675264  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.675275  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:52.675287  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:52.675311  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:52.716538  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:52.716582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:52.772121  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:52.772162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:52.787598  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:52.787632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:52.865380  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:52.865408  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:52.865422  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.449288  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:55.462386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:55.462474  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:55.498350  585602 cri.go:89] found id: ""
	I1205 20:34:55.498382  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.498391  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:55.498397  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:55.498457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:55.540878  585602 cri.go:89] found id: ""
	I1205 20:34:55.540915  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.540929  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:55.540939  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:55.541022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:55.577248  585602 cri.go:89] found id: ""
	I1205 20:34:55.577277  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.577288  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:55.577294  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:55.577375  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:55.615258  585602 cri.go:89] found id: ""
	I1205 20:34:55.615287  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.615308  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:55.615316  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:55.615384  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:55.652102  585602 cri.go:89] found id: ""
	I1205 20:34:55.652136  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.652147  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:55.652157  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:55.652228  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:55.689353  585602 cri.go:89] found id: ""
	I1205 20:34:55.689387  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.689399  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:55.689408  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:55.689486  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:55.727603  585602 cri.go:89] found id: ""
	I1205 20:34:55.727634  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.727648  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:55.727657  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:55.727729  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:55.765103  585602 cri.go:89] found id: ""
	I1205 20:34:55.765134  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.765143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:55.765156  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:55.765169  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:55.823878  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:55.823923  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:55.838966  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:55.839001  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:55.909385  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:55.909412  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:55.909424  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.992036  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:55.992080  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
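The cycle above (and its repeats below) is minikube's log collector probing a control plane that never came back: every crictl query for a control-plane container returns an empty list, and the bundled kubectl cannot reach the apiserver on localhost:8443. As an illustrative sketch only, not part of the test output, the same sweep can be reproduced by hand on the node with the commands the collector itself runs; the kubectl path is the v1.20.0 binary used by this run:

    # host-level services queried by the log collector
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

    # CRI containers (empty in this run) and the failing apiserver query
    sudo crictl ps -a
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig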
	I1205 20:34:54.165488  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.166030  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.542663  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.543260  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:57.120140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:59.621190  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.537231  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:58.552307  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:58.552392  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:58.589150  585602 cri.go:89] found id: ""
	I1205 20:34:58.589184  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.589200  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:58.589206  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:58.589272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:58.630344  585602 cri.go:89] found id: ""
	I1205 20:34:58.630370  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.630378  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:58.630385  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:58.630452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:58.669953  585602 cri.go:89] found id: ""
	I1205 20:34:58.669981  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.669991  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:58.669999  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:58.670055  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:58.708532  585602 cri.go:89] found id: ""
	I1205 20:34:58.708562  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.708570  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:58.708577  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:58.708631  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:58.745944  585602 cri.go:89] found id: ""
	I1205 20:34:58.745975  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.745986  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:58.745994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:58.746051  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.787177  585602 cri.go:89] found id: ""
	I1205 20:34:58.787206  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.787214  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:58.787221  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:58.787272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:58.822084  585602 cri.go:89] found id: ""
	I1205 20:34:58.822123  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.822134  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:58.822142  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:58.822210  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:58.858608  585602 cri.go:89] found id: ""
	I1205 20:34:58.858645  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.858657  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:58.858670  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:58.858691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:58.873289  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:58.873322  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:58.947855  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:58.947884  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:58.947900  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:59.028348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:59.028397  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:59.069172  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:59.069206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.623309  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:01.637362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:01.637449  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:01.678867  585602 cri.go:89] found id: ""
	I1205 20:35:01.678907  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.678919  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:01.678928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:01.679001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:01.715333  585602 cri.go:89] found id: ""
	I1205 20:35:01.715364  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.715372  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:01.715379  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:01.715439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:01.754247  585602 cri.go:89] found id: ""
	I1205 20:35:01.754277  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.754286  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:01.754292  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:01.754348  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:01.791922  585602 cri.go:89] found id: ""
	I1205 20:35:01.791957  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.791968  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:01.791977  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:01.792045  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:01.827261  585602 cri.go:89] found id: ""
	I1205 20:35:01.827294  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.827307  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:01.827315  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:01.827389  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.665248  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.163431  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.043056  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:03.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:02.122540  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:04.620544  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.864205  585602 cri.go:89] found id: ""
	I1205 20:35:01.864234  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.864243  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:01.864249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:01.864332  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:01.902740  585602 cri.go:89] found id: ""
	I1205 20:35:01.902773  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.902783  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:01.902789  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:01.902857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:01.941627  585602 cri.go:89] found id: ""
	I1205 20:35:01.941657  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.941666  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:01.941677  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:01.941690  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.995743  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:01.995791  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:02.010327  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:02.010368  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:02.086879  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:02.086907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:02.086921  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:02.166500  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:02.166538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:04.716638  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:04.730922  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:04.730992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:04.768492  585602 cri.go:89] found id: ""
	I1205 20:35:04.768524  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.768534  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:04.768540  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:04.768606  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:04.803740  585602 cri.go:89] found id: ""
	I1205 20:35:04.803776  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.803789  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:04.803797  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:04.803866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:04.840907  585602 cri.go:89] found id: ""
	I1205 20:35:04.840947  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.840960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:04.840968  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:04.841036  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:04.875901  585602 cri.go:89] found id: ""
	I1205 20:35:04.875933  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.875943  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:04.875949  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:04.876003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:04.913581  585602 cri.go:89] found id: ""
	I1205 20:35:04.913617  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.913627  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:04.913634  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:04.913689  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:04.952460  585602 cri.go:89] found id: ""
	I1205 20:35:04.952504  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.952519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:04.952528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:04.952617  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:04.989939  585602 cri.go:89] found id: ""
	I1205 20:35:04.989968  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.989979  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:04.989985  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:04.990041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:05.025017  585602 cri.go:89] found id: ""
	I1205 20:35:05.025052  585602 logs.go:282] 0 containers: []
	W1205 20:35:05.025066  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:05.025078  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:05.025094  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:05.068179  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:05.068223  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:05.127311  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:05.127369  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:05.141092  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:05.141129  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:05.217648  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:05.217678  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:05.217691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:03.163987  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:05.164131  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:07.165804  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:06.043765  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:08.036400  585113 pod_ready.go:82] duration metric: took 4m0.000157493s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	E1205 20:35:08.036457  585113 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:35:08.036489  585113 pod_ready.go:39] duration metric: took 4m11.05050249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:08.036554  585113 kubeadm.go:597] duration metric: took 4m18.178903617s to restartPrimaryControlPlane
	W1205 20:35:08.036733  585113 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:08.036784  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
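Each pod_ready.go:103 line above is one iteration of minikube's readiness poll; for process 585113 the 4m0s budget expires here, so the harness gives up on restarting the existing control plane and falls back to kubeadm reset. As an assumption-flagged illustration (this exact command is not run by the harness), an equivalent manual check for the pod being polled would be:

    # illustrative only: wait for the same pod this run was polling, with the same 4m budget
    kubectl --namespace kube-system wait pod metrics-server-6867b74b74-tlsjl \
      --for=condition=Ready --timeout=4m0s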
	I1205 20:35:06.621887  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:09.119692  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:07.793457  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:07.808710  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:07.808778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:07.846331  585602 cri.go:89] found id: ""
	I1205 20:35:07.846366  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.846380  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:07.846389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:07.846462  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:07.881185  585602 cri.go:89] found id: ""
	I1205 20:35:07.881222  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.881236  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:07.881243  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:07.881307  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:07.918463  585602 cri.go:89] found id: ""
	I1205 20:35:07.918501  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.918514  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:07.918522  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:07.918589  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:07.956329  585602 cri.go:89] found id: ""
	I1205 20:35:07.956364  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.956375  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:07.956385  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:07.956456  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:07.992173  585602 cri.go:89] found id: ""
	I1205 20:35:07.992212  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.992222  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:07.992229  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:07.992318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:08.030183  585602 cri.go:89] found id: ""
	I1205 20:35:08.030214  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.030226  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:08.030235  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:08.030309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:08.072320  585602 cri.go:89] found id: ""
	I1205 20:35:08.072362  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.072374  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:08.072382  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:08.072452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:08.124220  585602 cri.go:89] found id: ""
	I1205 20:35:08.124253  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.124277  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:08.124292  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:08.124310  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:08.171023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:08.171057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:08.237645  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:08.237699  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:08.252708  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:08.252744  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:08.343107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:08.343140  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:08.343158  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:10.919646  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:10.934494  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:10.934562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:10.971816  585602 cri.go:89] found id: ""
	I1205 20:35:10.971855  585602 logs.go:282] 0 containers: []
	W1205 20:35:10.971868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:10.971878  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:10.971950  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:11.010031  585602 cri.go:89] found id: ""
	I1205 20:35:11.010071  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.010084  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:11.010095  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:11.010170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:11.046520  585602 cri.go:89] found id: ""
	I1205 20:35:11.046552  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.046561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:11.046568  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:11.046632  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:11.081385  585602 cri.go:89] found id: ""
	I1205 20:35:11.081426  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.081440  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:11.081448  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:11.081522  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:11.122529  585602 cri.go:89] found id: ""
	I1205 20:35:11.122559  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.122568  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:11.122576  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:11.122656  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:11.161684  585602 cri.go:89] found id: ""
	I1205 20:35:11.161767  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.161788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:11.161797  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:11.161862  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:11.199796  585602 cri.go:89] found id: ""
	I1205 20:35:11.199824  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.199833  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:11.199842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:11.199916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:11.235580  585602 cri.go:89] found id: ""
	I1205 20:35:11.235617  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.235625  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:11.235635  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:11.235647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:11.291005  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:11.291055  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:11.305902  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:11.305947  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:11.375862  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:11.375894  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:11.375915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:11.456701  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:11.456746  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:09.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.664200  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.119954  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:13.120903  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:15.622247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:14.006509  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:14.020437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:14.020531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:14.056878  585602 cri.go:89] found id: ""
	I1205 20:35:14.056905  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.056915  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:14.056923  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:14.056993  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:14.091747  585602 cri.go:89] found id: ""
	I1205 20:35:14.091782  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.091792  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:14.091800  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:14.091860  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:14.131409  585602 cri.go:89] found id: ""
	I1205 20:35:14.131440  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.131453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:14.131461  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:14.131532  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:14.170726  585602 cri.go:89] found id: ""
	I1205 20:35:14.170754  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.170765  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:14.170773  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:14.170851  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:14.208619  585602 cri.go:89] found id: ""
	I1205 20:35:14.208654  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.208666  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:14.208674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:14.208747  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:14.247734  585602 cri.go:89] found id: ""
	I1205 20:35:14.247771  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.247784  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:14.247793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:14.247855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:14.296090  585602 cri.go:89] found id: ""
	I1205 20:35:14.296119  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.296129  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:14.296136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:14.296205  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:14.331009  585602 cri.go:89] found id: ""
	I1205 20:35:14.331037  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.331045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:14.331057  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:14.331070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:14.384877  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:14.384935  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:14.400458  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:14.400507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:14.475745  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:14.475774  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:14.475787  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:14.553150  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:14.553192  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:14.164516  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:16.165316  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:18.119418  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.120499  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:17.095700  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:17.109135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:17.109215  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:17.146805  585602 cri.go:89] found id: ""
	I1205 20:35:17.146838  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.146851  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:17.146861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:17.146919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:17.186861  585602 cri.go:89] found id: ""
	I1205 20:35:17.186891  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.186901  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:17.186907  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:17.186960  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:17.223113  585602 cri.go:89] found id: ""
	I1205 20:35:17.223148  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.223159  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:17.223166  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:17.223238  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:17.263066  585602 cri.go:89] found id: ""
	I1205 20:35:17.263098  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.263110  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:17.263118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:17.263187  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:17.300113  585602 cri.go:89] found id: ""
	I1205 20:35:17.300153  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.300167  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:17.300175  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:17.300237  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:17.339135  585602 cri.go:89] found id: ""
	I1205 20:35:17.339172  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.339184  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:17.339193  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:17.339260  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:17.376200  585602 cri.go:89] found id: ""
	I1205 20:35:17.376229  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.376239  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:17.376248  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:17.376354  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:17.411852  585602 cri.go:89] found id: ""
	I1205 20:35:17.411895  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.411906  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:17.411919  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:17.411948  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:17.463690  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:17.463729  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:17.478912  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:17.478946  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:17.552874  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:17.552907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:17.552933  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:17.633621  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:17.633667  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:20.175664  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:20.191495  585602 kubeadm.go:597] duration metric: took 4m4.568774806s to restartPrimaryControlPlane
	W1205 20:35:20.191570  585602 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:20.191594  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:20.660014  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:20.676684  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:20.688338  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:20.699748  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:20.699770  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:20.699822  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:20.710417  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:20.710497  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:20.722295  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:20.732854  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:20.732933  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:20.744242  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.754593  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:20.754671  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.766443  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:20.777087  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:20.777157  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
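The block above is minikube's stale-config check: with the cluster reset, none of the four kubeconfig files under /etc/kubernetes exist, so each grep for the control-plane endpoint exits with status 2 and the matching rm -f is a no-op. A minimal sketch of the same cleanup, assuming the endpoint and paths shown in this run:

    # sketch of the cleanup performed above; endpoint and paths are the ones from this run
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done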
	I1205 20:35:20.788406  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:20.869602  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:35:20.869778  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:21.022417  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:21.022558  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:21.022715  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:35:21.213817  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:21.216995  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:21.217146  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:21.217240  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:21.217373  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:21.217502  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:21.217614  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:21.217699  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:21.217784  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:21.217876  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:21.217985  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:21.218129  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:21.218186  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:21.218289  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:21.337924  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:21.464355  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:21.709734  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:21.837040  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:21.860767  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:21.860894  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:21.860934  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:22.002564  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:18.663978  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.665113  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.622593  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.120101  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.004407  585602 out.go:235]   - Booting up control plane ...
	I1205 20:35:22.004560  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:22.009319  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:22.010412  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:22.019041  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:22.021855  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:35:23.163493  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.164833  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.164914  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.619140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.622476  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.664525  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:32.163413  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.411201  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.37438104s)
	I1205 20:35:34.411295  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:34.428580  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:34.439233  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:34.450165  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:34.450192  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:34.450255  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:34.461910  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:34.461985  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:34.473936  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:34.484160  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:34.484240  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:34.495772  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.507681  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:34.507757  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.519932  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:34.532111  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:34.532190  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
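The block above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (grep exiting with status 2 here simply means the file does not exist yet). A minimal local sketch of that pattern follows; it is an illustration of the logged commands, not minikube's actual ssh_runner-based implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // cleanStaleKubeconfigs mirrors the logged pattern: grep each config for the
    // expected endpoint; if grep cannot confirm it (no match, or the file is
    // missing), remove the file so kubeadm regenerates it on the next init.
    func cleanStaleKubeconfigs(endpoint string, files []string) {
    	for _, f := range files {
    		// grep exits 0 on a match, 1 on no match, 2 on errors such as a missing file.
    		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q not confirmed in %s (%v) - removing\n", endpoint, f, err)
    			_ = os.Remove(f) // ignore the error if the file is already gone
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
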
	I1205 20:35:34.543360  585113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:34.594095  585113 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:35:34.594214  585113 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:34.712502  585113 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:34.712685  585113 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:34.712818  585113 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:35:34.729419  585113 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:34.731281  585113 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:34.731395  585113 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:34.731486  585113 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:34.731614  585113 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:34.731715  585113 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:34.731812  585113 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:34.731902  585113 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:34.731994  585113 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:34.732082  585113 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:34.732179  585113 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:34.732252  585113 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:34.732336  585113 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:34.732428  585113 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:35.125135  585113 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:35.188591  585113 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:35:35.330713  585113 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:35.497785  585113 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:35.839010  585113 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:35.839656  585113 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:35.842311  585113 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:32.118898  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.119153  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.164007  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:36.164138  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:35.844403  585113 out.go:235]   - Booting up control plane ...
	I1205 20:35:35.844534  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:35.844602  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:35.845242  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:35.865676  585113 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:35.871729  585113 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:35.871825  585113 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:36.007728  585113 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:35:36.007948  585113 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:35:36.510090  585113 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.141078ms
	I1205 20:35:36.510208  585113 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
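The kubelet-check and api-check phases logged here are plain polls of an HTTP health endpoint until it answers 200 within the stated deadline. A rough stand-alone equivalent of that wait loop is sketched below; the URL and 4m0s budget mirror the kubelet check above, while the retry interval and per-request timeout are illustrative assumptions.

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns HTTP 200 or the deadline passes,
    // roughly what the kubelet-check / api-check phases above are doing.
    func waitHealthy(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	client := &http.Client{Timeout: 2 * time.Second}
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // retry interval is an assumption
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
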
	I1205 20:35:36.119432  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:38.121093  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.620523  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:41.512166  585113 kubeadm.go:310] [api-check] The API server is healthy after 5.00243802s
	I1205 20:35:41.529257  585113 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:35:41.545958  585113 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:35:41.585500  585113 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:35:41.585726  585113 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-789000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:35:41.606394  585113 kubeadm.go:310] [bootstrap-token] Using token: j30n5x.myrhz9pya6yl1f1z
	I1205 20:35:41.608046  585113 out.go:235]   - Configuring RBAC rules ...
	I1205 20:35:41.608229  585113 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:35:41.616083  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:35:41.625777  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:35:41.629934  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:35:41.633726  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:35:41.640454  585113 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:35:41.923125  585113 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:35:42.363841  585113 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:35:42.924569  585113 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:35:42.924594  585113 kubeadm.go:310] 
	I1205 20:35:42.924660  585113 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:35:42.924668  585113 kubeadm.go:310] 
	I1205 20:35:42.924750  585113 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:35:42.924768  585113 kubeadm.go:310] 
	I1205 20:35:42.924802  585113 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:35:42.924865  585113 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:35:42.924926  585113 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:35:42.924969  585113 kubeadm.go:310] 
	I1205 20:35:42.925060  585113 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:35:42.925069  585113 kubeadm.go:310] 
	I1205 20:35:42.925120  585113 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:35:42.925154  585113 kubeadm.go:310] 
	I1205 20:35:42.925255  585113 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:35:42.925374  585113 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:35:42.925477  585113 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:35:42.925488  585113 kubeadm.go:310] 
	I1205 20:35:42.925604  585113 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:35:42.925691  585113 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:35:42.925701  585113 kubeadm.go:310] 
	I1205 20:35:42.925830  585113 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.925966  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:35:42.926019  585113 kubeadm.go:310] 	--control-plane 
	I1205 20:35:42.926034  585113 kubeadm.go:310] 
	I1205 20:35:42.926136  585113 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:35:42.926147  585113 kubeadm.go:310] 
	I1205 20:35:42.926258  585113 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.926400  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 20:35:42.927105  585113 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
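The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA certificate's Subject Public Key Info, so it can be recomputed from ca.crt if the printed command is lost. A small sketch follows, assuming the certificate sits in the /var/lib/minikube/certs directory reported by the [certs] line earlier in this run.

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Certificate directory taken from the [certs] log line; adjust for other clusters.
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
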
	I1205 20:35:42.927269  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:35:42.927283  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:35:42.929046  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:35:38.164698  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.665499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:42.930620  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:35:42.941706  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:35:42.964041  585113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:35:42.964154  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.964191  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-789000 minikube.k8s.io/updated_at=2024_12_05T20_35_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=embed-certs-789000 minikube.k8s.io/primary=true
	I1205 20:35:43.027876  585113 ops.go:34] apiserver oom_adj: -16
	I1205 20:35:43.203087  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:43.703446  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.203895  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.703277  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:45.203421  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.623820  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.118957  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.704129  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.203682  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.703213  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.203225  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.330051  585113 kubeadm.go:1113] duration metric: took 4.365966546s to wait for elevateKubeSystemPrivileges
	I1205 20:35:47.330104  585113 kubeadm.go:394] duration metric: took 4m57.530103825s to StartCluster
	I1205 20:35:47.330143  585113 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.330296  585113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:35:47.332937  585113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.333273  585113 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:47.333380  585113 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:35:47.333478  585113 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-789000"
	I1205 20:35:47.333500  585113 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-789000"
	I1205 20:35:47.333499  585113 addons.go:69] Setting default-storageclass=true in profile "embed-certs-789000"
	W1205 20:35:47.333510  585113 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:35:47.333523  585113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-789000"
	I1205 20:35:47.333545  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.333554  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:47.333631  585113 addons.go:69] Setting metrics-server=true in profile "embed-certs-789000"
	I1205 20:35:47.333651  585113 addons.go:234] Setting addon metrics-server=true in "embed-certs-789000"
	W1205 20:35:47.333660  585113 addons.go:243] addon metrics-server should already be in state true
	I1205 20:35:47.333692  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.334001  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334043  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334003  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334101  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334157  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334339  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.335448  585113 out.go:177] * Verifying Kubernetes components...
	I1205 20:35:47.337056  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:47.353039  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I1205 20:35:47.353726  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.354437  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.354467  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.354870  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.355580  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.355654  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.355702  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I1205 20:35:47.355760  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1205 20:35:47.356180  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356224  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356771  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356796  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.356815  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356834  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.357246  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357245  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.357862  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.357916  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.361951  585113 addons.go:234] Setting addon default-storageclass=true in "embed-certs-789000"
	W1205 20:35:47.361974  585113 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:35:47.362004  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.362369  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.362416  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.372862  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I1205 20:35:47.373465  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.373983  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.374011  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.374347  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.374570  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.376329  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.378476  585113 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:35:47.379882  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:35:47.379909  585113 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:35:47.379933  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.382045  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
	I1205 20:35:47.382855  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.383440  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.383459  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.383563  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.383828  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.384092  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.384101  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.384117  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.384150  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1205 20:35:47.384381  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.384517  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.384635  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.384705  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.384850  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.385249  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.385262  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.385613  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.385744  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.386054  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.386085  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.387649  585113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:35:43.164980  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.665449  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.665725  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.388998  585113 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.389011  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:35:47.389025  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.391724  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392285  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.392317  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392362  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.392521  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.392663  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.392804  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.402558  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I1205 20:35:47.403109  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.403636  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.403653  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.403977  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.404155  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.405636  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.405859  585113 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.405876  585113 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:35:47.405894  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.408366  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.408827  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.408868  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.409107  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.409276  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.409436  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.409577  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.589046  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:47.620164  585113 node_ready.go:35] waiting up to 6m0s for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635800  585113 node_ready.go:49] node "embed-certs-789000" has status "Ready":"True"
	I1205 20:35:47.635824  585113 node_ready.go:38] duration metric: took 15.625152ms for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635836  585113 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:47.647842  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:47.738529  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:35:47.738558  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:35:47.741247  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.741443  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.822503  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:35:47.822543  585113 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:35:47.886482  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:47.886512  585113 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:35:47.926018  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:48.100013  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100059  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.100371  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.100392  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.100408  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100416  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.102261  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.102313  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.102342  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115407  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.115429  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.115762  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.115859  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115870  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721035  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721068  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721380  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721400  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.721447  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721855  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721868  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721880  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.294512  585113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.36844122s)
	I1205 20:35:49.294581  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.294598  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.294953  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295014  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295028  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295057  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.295071  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.295341  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295391  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295403  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295414  585113 addons.go:475] Verifying addon metrics-server=true in "embed-certs-789000"
	I1205 20:35:49.297183  585113 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:35:49.298509  585113 addons.go:510] duration metric: took 1.965140064s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:35:49.657195  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.121445  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:49.622568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:50.163712  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.165654  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.155012  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.155309  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.155346  585113 pod_ready.go:82] duration metric: took 6.507465102s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.155356  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160866  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.160895  585113 pod_ready.go:82] duration metric: took 5.529623ms for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160909  585113 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166444  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.166475  585113 pod_ready.go:82] duration metric: took 5.558605ms for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166487  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:52.118202  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.119543  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.664661  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.162802  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:56.172832  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.173005  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.173052  585113 pod_ready.go:82] duration metric: took 3.006542827s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.173068  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178461  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.178489  585113 pod_ready.go:82] duration metric: took 5.413563ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178499  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183130  585113 pod_ready.go:93] pod "kube-proxy-znjpk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.183162  585113 pod_ready.go:82] duration metric: took 4.655743ms for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183178  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351816  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.351842  585113 pod_ready.go:82] duration metric: took 168.656328ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351851  585113 pod_ready.go:39] duration metric: took 9.716003373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:57.351866  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:35:57.351921  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:57.368439  585113 api_server.go:72] duration metric: took 10.035127798s to wait for apiserver process to appear ...
	I1205 20:35:57.368471  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:35:57.368496  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:35:57.372531  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:35:57.373449  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:35:57.373466  585113 api_server.go:131] duration metric: took 4.987422ms to wait for apiserver health ...
	I1205 20:35:57.373474  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:35:57.554591  585113 system_pods.go:59] 9 kube-system pods found
	I1205 20:35:57.554620  585113 system_pods.go:61] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.554625  585113 system_pods.go:61] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.554629  585113 system_pods.go:61] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.554633  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.554637  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.554640  585113 system_pods.go:61] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.554643  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.554649  585113 system_pods.go:61] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.554653  585113 system_pods.go:61] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.554660  585113 system_pods.go:74] duration metric: took 181.180919ms to wait for pod list to return data ...
	I1205 20:35:57.554667  585113 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:35:57.757196  585113 default_sa.go:45] found service account: "default"
	I1205 20:35:57.757226  585113 default_sa.go:55] duration metric: took 202.553823ms for default service account to be created ...
	I1205 20:35:57.757236  585113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:35:57.956943  585113 system_pods.go:86] 9 kube-system pods found
	I1205 20:35:57.956976  585113 system_pods.go:89] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.956982  585113 system_pods.go:89] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.956985  585113 system_pods.go:89] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.956989  585113 system_pods.go:89] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.956992  585113 system_pods.go:89] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.956996  585113 system_pods.go:89] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.956999  585113 system_pods.go:89] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.957005  585113 system_pods.go:89] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.957010  585113 system_pods.go:89] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.957019  585113 system_pods.go:126] duration metric: took 199.777723ms to wait for k8s-apps to be running ...
	I1205 20:35:57.957028  585113 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:35:57.957079  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:57.971959  585113 system_svc.go:56] duration metric: took 14.916307ms WaitForService to wait for kubelet
	I1205 20:35:57.972000  585113 kubeadm.go:582] duration metric: took 10.638693638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:35:57.972027  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:35:58.153272  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:35:58.153302  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:35:58.153323  585113 node_conditions.go:105] duration metric: took 181.282208ms to run NodePressure ...
	I1205 20:35:58.153338  585113 start.go:241] waiting for startup goroutines ...
	I1205 20:35:58.153348  585113 start.go:246] waiting for cluster config update ...
	I1205 20:35:58.153361  585113 start.go:255] writing updated cluster config ...
	I1205 20:35:58.153689  585113 ssh_runner.go:195] Run: rm -f paused
	I1205 20:35:58.206377  585113 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:35:58.208199  585113 out.go:177] * Done! kubectl is now configured to use "embed-certs-789000" cluster and "default" namespace by default
	I1205 20:35:56.626799  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.119621  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.164803  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.663254  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.119680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:03.121023  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.121537  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:02.025194  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:36:02.025306  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:02.025498  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:03.664172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.672410  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.623229  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.119845  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.025608  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:07.025922  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:08.164875  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.665374  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:12.622566  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.120084  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:13.163662  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.164021  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.619629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:19.620524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.026490  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:17.026747  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:19.663904  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:22.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:21.621019  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.119524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.164932  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.670748  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.119795  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:27.113870  585025 pod_ready.go:82] duration metric: took 4m0.000886242s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:27.113920  585025 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:36:27.113943  585025 pod_ready.go:39] duration metric: took 4m14.547292745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:27.113975  585025 kubeadm.go:597] duration metric: took 4m21.939840666s to restartPrimaryControlPlane
	W1205 20:36:27.114068  585025 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:36:27.114099  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:36:29.163499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:29.664158  585929 pod_ready.go:82] duration metric: took 4m0.007168384s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:29.664191  585929 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:36:29.664201  585929 pod_ready.go:39] duration metric: took 4m2.00733866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
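The pod_ready lines above show the harness repeatedly polling each system pod and reporting whether its Ready condition has turned True, giving up here once the 4m0s budget runs out. The same check can be written directly against client-go; the following is a minimal sketch only (the kubeconfig path and pod name are placeholders, and the helper is illustrative rather than minikube's own code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True, mirroring the
    // "has status Ready:False" checks logged above.
    func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Placeholder kubeconfig path and pod name; substitute values from your own cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := podReady(cs, "kube-system", "metrics-server-xxxxx")
        fmt.Println("ready:", ready, "err:", err)
    }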
	I1205 20:36:29.664226  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:36:29.664290  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:29.664377  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:29.712790  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:29.712814  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:29.712819  585929 cri.go:89] found id: ""
	I1205 20:36:29.712826  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:29.712879  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.717751  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.721968  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:29.722045  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:29.770289  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:29.770322  585929 cri.go:89] found id: ""
	I1205 20:36:29.770330  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:29.770392  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.775391  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:29.775475  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:29.816354  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:29.816380  585929 cri.go:89] found id: ""
	I1205 20:36:29.816388  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:29.816454  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.821546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:29.821621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:29.870442  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:29.870467  585929 cri.go:89] found id: ""
	I1205 20:36:29.870476  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:29.870541  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.875546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:29.875658  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:29.924567  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:29.924595  585929 cri.go:89] found id: ""
	I1205 20:36:29.924603  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:29.924666  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.929148  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:29.929216  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:29.968092  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:29.968122  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:29.968126  585929 cri.go:89] found id: ""
	I1205 20:36:29.968134  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:29.968186  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.973062  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.977693  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:29.977762  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:30.014944  585929 cri.go:89] found id: ""
	I1205 20:36:30.014982  585929 logs.go:282] 0 containers: []
	W1205 20:36:30.014994  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:30.015002  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:30.015101  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:30.062304  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:30.062328  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:30.062332  585929 cri.go:89] found id: ""
	I1205 20:36:30.062339  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:30.062394  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.067152  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.071767  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:30.071788  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:30.125030  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:30.125069  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:30.167607  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:30.167641  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:30.217522  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:30.217558  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:30.298655  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:30.298695  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:30.346687  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:30.346721  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:30.887069  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:30.887126  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:30.907313  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:30.907360  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:30.950285  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:30.950326  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:30.990895  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:30.990929  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:31.032950  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:31.033010  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:31.115132  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:31.115176  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:31.257760  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:31.257797  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:31.300521  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:31.300553  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:31.338339  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:31.338373  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
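Each log-gathering pass above follows the same pattern: resolve container IDs with "crictl ps -a --quiet --name=<component>", then tail the last 400 lines of every match with "crictl logs --tail 400 <id>". Reproducing that outside the harness only needs a thin wrapper around crictl; a minimal sketch, assuming passwordless sudo and crictl on the PATH (the helper names are illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (any state) whose name matches component.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs prints the last n log lines of a single container.
    func tailLogs(id string, n int) error {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
        fmt.Printf("==> %s <==\n%s\n", id, out)
        return err
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("listing", c, "failed:", err)
                continue
            }
            for _, id := range ids {
                _ = tailLogs(id, 400)
            }
        }
    }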
	I1205 20:36:33.892406  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:36:33.908917  585929 api_server.go:72] duration metric: took 4m14.472283422s to wait for apiserver process to appear ...
	I1205 20:36:33.908950  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:36:33.908993  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:33.909067  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:33.958461  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:33.958496  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:33.958502  585929 cri.go:89] found id: ""
	I1205 20:36:33.958511  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:33.958585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.963333  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.969472  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:33.969549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:34.010687  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.010711  585929 cri.go:89] found id: ""
	I1205 20:36:34.010721  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:34.010790  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.016468  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:34.016557  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:34.056627  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:34.056656  585929 cri.go:89] found id: ""
	I1205 20:36:34.056666  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:34.056729  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.061343  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:34.061411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:34.099534  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:34.099563  585929 cri.go:89] found id: ""
	I1205 20:36:34.099573  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:34.099643  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.104828  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:34.104891  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:34.150749  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:34.150781  585929 cri.go:89] found id: ""
	I1205 20:36:34.150792  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:34.150863  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.155718  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:34.155797  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:34.202896  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:34.202927  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:34.202934  585929 cri.go:89] found id: ""
	I1205 20:36:34.202943  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:34.203028  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.207791  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.212163  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:34.212243  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:34.254423  585929 cri.go:89] found id: ""
	I1205 20:36:34.254458  585929 logs.go:282] 0 containers: []
	W1205 20:36:34.254470  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:34.254479  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:34.254549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:34.294704  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:34.294737  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:34.294741  585929 cri.go:89] found id: ""
	I1205 20:36:34.294753  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:34.294820  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.299361  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.305411  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:34.305437  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:34.357438  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:34.357472  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.405858  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:34.405893  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:34.898506  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:34.898551  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:35.009818  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:35.009856  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:35.048852  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:35.048882  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:35.100458  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:35.100511  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:35.139923  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:35.139959  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:35.184818  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:35.184852  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:35.265196  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:35.265238  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:35.280790  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:35.280830  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:35.323308  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:35.323343  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:35.364578  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:35.364610  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:35.411413  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:35.411456  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:35.458077  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:35.458117  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:37.997701  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:36:38.003308  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:36:38.004465  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:36:38.004495  585929 api_server.go:131] duration metric: took 4.095536578s to wait for apiserver health ...
	I1205 20:36:38.004505  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:36:38.004532  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:38.004598  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:37.027599  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:37.027910  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:38.048388  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.048427  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:38.048434  585929 cri.go:89] found id: ""
	I1205 20:36:38.048442  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:38.048514  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.052931  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.057338  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:38.057403  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:38.097715  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.097750  585929 cri.go:89] found id: ""
	I1205 20:36:38.097761  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:38.097830  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.104038  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:38.104110  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:38.148485  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.148510  585929 cri.go:89] found id: ""
	I1205 20:36:38.148519  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:38.148585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.153619  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:38.153702  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:38.190467  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.190495  585929 cri.go:89] found id: ""
	I1205 20:36:38.190505  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:38.190561  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.195177  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:38.195259  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:38.240020  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:38.240045  585929 cri.go:89] found id: ""
	I1205 20:36:38.240054  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:38.240123  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.244359  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:38.244425  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:38.282241  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:38.282267  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.282284  585929 cri.go:89] found id: ""
	I1205 20:36:38.282292  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:38.282357  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.287437  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.291561  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:38.291621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:38.333299  585929 cri.go:89] found id: ""
	I1205 20:36:38.333335  585929 logs.go:282] 0 containers: []
	W1205 20:36:38.333345  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:38.333352  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:38.333411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:38.370920  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.370948  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.370952  585929 cri.go:89] found id: ""
	I1205 20:36:38.370960  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:38.371037  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.375549  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.379517  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:38.379548  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.416990  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:38.417023  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:38.499859  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:38.499905  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:38.625291  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:38.625332  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.672549  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:38.672586  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.710017  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:38.710055  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.754004  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:38.754049  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:38.802163  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:38.802206  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:38.817670  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:38.817704  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.864833  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:38.864875  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.909490  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:38.909526  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.952117  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:38.952164  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:39.347620  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:39.347686  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:39.392412  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:39.392450  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:39.433711  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:39.433749  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:41.996602  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:36:41.996634  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:41.996640  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:41.996644  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:41.996648  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:41.996651  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:41.996654  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:41.996661  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:41.996665  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:41.996674  585929 system_pods.go:74] duration metric: took 3.992162062s to wait for pod list to return data ...
	I1205 20:36:41.996682  585929 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:36:41.999553  585929 default_sa.go:45] found service account: "default"
	I1205 20:36:41.999580  585929 default_sa.go:55] duration metric: took 2.889197ms for default service account to be created ...
	I1205 20:36:41.999589  585929 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:36:42.005061  585929 system_pods.go:86] 8 kube-system pods found
	I1205 20:36:42.005099  585929 system_pods.go:89] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:42.005111  585929 system_pods.go:89] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:42.005118  585929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:42.005126  585929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:42.005135  585929 system_pods.go:89] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:42.005143  585929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:42.005159  585929 system_pods.go:89] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:42.005171  585929 system_pods.go:89] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:42.005187  585929 system_pods.go:126] duration metric: took 5.591652ms to wait for k8s-apps to be running ...
	I1205 20:36:42.005201  585929 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:36:42.005267  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:42.021323  585929 system_svc.go:56] duration metric: took 16.10852ms WaitForService to wait for kubelet
	I1205 20:36:42.021358  585929 kubeadm.go:582] duration metric: took 4m22.584731606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:36:42.021424  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:36:42.024632  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:42.024658  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:42.024682  585929 node_conditions.go:105] duration metric: took 3.248548ms to run NodePressure ...
	I1205 20:36:42.024698  585929 start.go:241] waiting for startup goroutines ...
	I1205 20:36:42.024709  585929 start.go:246] waiting for cluster config update ...
	I1205 20:36:42.024742  585929 start.go:255] writing updated cluster config ...
	I1205 20:36:42.025047  585929 ssh_runner.go:195] Run: rm -f paused
	I1205 20:36:42.077303  585929 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:36:42.079398  585929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-942599" cluster and "default" namespace by default
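The run above completes once the apiserver healthz endpoint answers 200 "ok". The polling pattern itself is straightforward; a minimal sketch of it follows (the address is copied from the log, certificate verification is skipped purely for brevity, and a real client would trust the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustration only: trust the cluster CA in real use instead of skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        // Address taken from the log above; any apiserver /healthz endpoint works the same way.
        if err := waitForHealthz("https://192.168.50.96:8444/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }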
	I1205 20:36:53.411276  585025 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297141231s)
	I1205 20:36:53.411423  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:53.432474  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:36:53.443908  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:36:53.454789  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:36:53.454821  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:36:53.454873  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:36:53.465648  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:36:53.465719  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:36:53.476492  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:36:53.486436  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:36:53.486505  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:36:53.499146  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.510237  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:36:53.510324  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.521186  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:36:53.531797  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:36:53.531890  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
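The grep/rm pairs above amount to a stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. The log shows this done over SSH with shell commands; an in-process equivalent would look roughly like the sketch below (it must run with enough privilege to touch /etc/kubernetes, and it is not minikube's own implementation):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfigs removes any listed config that is missing or does not
    // mention the expected control-plane endpoint, as the rm -f calls above do.
    func cleanStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && strings.Contains(string(data), endpoint) {
                fmt.Println("keeping", f)
                continue
            }
            // Missing or stale: delete it, ignoring "does not exist" like rm -f.
            if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
                fmt.Println("remove failed:", rmErr)
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }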
	I1205 20:36:53.543056  585025 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:36:53.735019  585025 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:01.531096  585025 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:37:01.531179  585025 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:37:01.531278  585025 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:37:01.531407  585025 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:37:01.531546  585025 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:37:01.531635  585025 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:37:01.533284  585025 out.go:235]   - Generating certificates and keys ...
	I1205 20:37:01.533400  585025 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:37:01.533484  585025 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:37:01.533589  585025 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:37:01.533676  585025 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:37:01.533741  585025 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:37:01.533820  585025 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:37:01.533901  585025 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:37:01.533954  585025 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:37:01.534023  585025 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:37:01.534097  585025 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:37:01.534137  585025 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:37:01.534193  585025 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:37:01.534264  585025 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:37:01.534347  585025 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:37:01.534414  585025 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:37:01.534479  585025 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:37:01.534529  585025 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:37:01.534600  585025 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:37:01.534656  585025 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:37:01.536208  585025 out.go:235]   - Booting up control plane ...
	I1205 20:37:01.536326  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:37:01.536394  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:37:01.536487  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:37:01.536653  585025 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:37:01.536772  585025 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:37:01.536814  585025 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:37:01.536987  585025 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:37:01.537144  585025 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:37:01.537240  585025 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.640403ms
	I1205 20:37:01.537352  585025 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:37:01.537438  585025 kubeadm.go:310] [api-check] The API server is healthy after 5.002069704s
	I1205 20:37:01.537566  585025 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:37:01.537705  585025 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:37:01.537766  585025 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:37:01.537959  585025 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-816185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:37:01.538037  585025 kubeadm.go:310] [bootstrap-token] Using token: l8cx4j.koqnwrdaqrc08irs
	I1205 20:37:01.539683  585025 out.go:235]   - Configuring RBAC rules ...
	I1205 20:37:01.539813  585025 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:37:01.539945  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:37:01.540157  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:37:01.540346  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:37:01.540482  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:37:01.540602  585025 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:37:01.540746  585025 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:37:01.540818  585025 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:37:01.540905  585025 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:37:01.540922  585025 kubeadm.go:310] 
	I1205 20:37:01.541012  585025 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:37:01.541027  585025 kubeadm.go:310] 
	I1205 20:37:01.541149  585025 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:37:01.541160  585025 kubeadm.go:310] 
	I1205 20:37:01.541197  585025 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:37:01.541253  585025 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:37:01.541297  585025 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:37:01.541303  585025 kubeadm.go:310] 
	I1205 20:37:01.541365  585025 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:37:01.541371  585025 kubeadm.go:310] 
	I1205 20:37:01.541417  585025 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:37:01.541427  585025 kubeadm.go:310] 
	I1205 20:37:01.541486  585025 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:37:01.541593  585025 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:37:01.541689  585025 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:37:01.541707  585025 kubeadm.go:310] 
	I1205 20:37:01.541811  585025 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:37:01.541917  585025 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:37:01.541928  585025 kubeadm.go:310] 
	I1205 20:37:01.542020  585025 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542138  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:37:01.542171  585025 kubeadm.go:310] 	--control-plane 
	I1205 20:37:01.542180  585025 kubeadm.go:310] 
	I1205 20:37:01.542264  585025 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:37:01.542283  585025 kubeadm.go:310] 
	I1205 20:37:01.542407  585025 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542513  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 20:37:01.542530  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:37:01.542538  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:37:01.543967  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:37:01.545652  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:37:01.557890  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:37:01.577447  585025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:37:01.577532  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-816185 minikube.k8s.io/updated_at=2024_12_05T20_37_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=no-preload-816185 minikube.k8s.io/primary=true
	I1205 20:37:01.577542  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:01.618121  585025 ops.go:34] apiserver oom_adj: -16
	I1205 20:37:01.806825  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.307212  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.807893  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.307202  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.806891  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.307571  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.807485  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.307695  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.387751  585025 kubeadm.go:1113] duration metric: took 3.810307917s to wait for elevateKubeSystemPrivileges
	I1205 20:37:05.387790  585025 kubeadm.go:394] duration metric: took 5m0.269375789s to StartCluster
	I1205 20:37:05.387810  585025 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.387891  585025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:37:05.389703  585025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.389984  585025 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:37:05.390056  585025 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:37:05.390179  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:37:05.390193  585025 addons.go:69] Setting storage-provisioner=true in profile "no-preload-816185"
	I1205 20:37:05.390216  585025 addons.go:69] Setting default-storageclass=true in profile "no-preload-816185"
	I1205 20:37:05.390246  585025 addons.go:69] Setting metrics-server=true in profile "no-preload-816185"
	I1205 20:37:05.390281  585025 addons.go:234] Setting addon metrics-server=true in "no-preload-816185"
	W1205 20:37:05.390295  585025 addons.go:243] addon metrics-server should already be in state true
	I1205 20:37:05.390340  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390255  585025 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-816185"
	I1205 20:37:05.390263  585025 addons.go:234] Setting addon storage-provisioner=true in "no-preload-816185"
	W1205 20:37:05.390463  585025 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:37:05.390533  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390844  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390888  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.390852  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390947  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390973  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391032  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391810  585025 out.go:177] * Verifying Kubernetes components...
	I1205 20:37:05.393274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:37:05.408078  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40259
	I1205 20:37:05.408366  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1205 20:37:05.408765  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.408780  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.409315  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409337  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409441  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409465  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409767  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409800  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.410249  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I1205 20:37:05.410487  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.410537  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.410753  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.411387  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.411412  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.411847  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.412515  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.412565  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.413770  585025 addons.go:234] Setting addon default-storageclass=true in "no-preload-816185"
	W1205 20:37:05.413796  585025 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:37:05.413828  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.414184  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.414231  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.430214  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1205 20:37:05.430684  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.431260  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.431286  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.431697  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.431929  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.432941  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1205 20:37:05.433361  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.433835  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.433855  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.433933  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.434385  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.434596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.434638  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I1205 20:37:05.435193  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.435667  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.435694  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.435994  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.436000  585025 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:37:05.436635  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.436657  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.436683  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.437421  585025 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.437441  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:37:05.437461  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.438221  585025 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:37:05.439704  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:37:05.439721  585025 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:37:05.439737  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.440522  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441031  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.441058  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441198  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.441352  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.441458  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.441582  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.445842  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446223  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.446248  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446449  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.446661  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.446806  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.446923  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.472870  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I1205 20:37:05.473520  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.474053  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.474080  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.474456  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.474666  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.476603  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.476836  585025 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.476859  585025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:37:05.476886  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.480063  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480546  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.480580  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.481175  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.481331  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.481425  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
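The ssh clients created above reuse the per-machine key minikube generated for this profile; a direct shell onto the node can be opened the same way (a sketch, assuming the key path, user, address and port logged in the sshutil lines above are still current):

	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa \
	  docker@192.168.61.37    # port 22, per the sshutil lines above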
	I1205 20:37:05.607284  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:37:05.627090  585025 node_ready.go:35] waiting up to 6m0s for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637577  585025 node_ready.go:49] node "no-preload-816185" has status "Ready":"True"
	I1205 20:37:05.637602  585025 node_ready.go:38] duration metric: took 10.476209ms for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637611  585025 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:05.642969  585025 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:05.696662  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.725276  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:37:05.725309  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:37:05.779102  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:37:05.779137  585025 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:37:05.814495  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.814531  585025 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:37:05.823828  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.863152  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.948854  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.948895  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949242  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949266  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949275  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.949294  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.949302  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949590  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949601  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949612  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.975655  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.975683  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.975962  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.975978  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004027  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.180164032s)
	I1205 20:37:07.004103  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004117  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004498  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004520  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004535  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004545  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004802  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004820  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208032  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.344819218s)
	I1205 20:37:07.208143  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208159  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208537  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208556  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208566  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208573  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208846  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208860  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208871  585025 addons.go:475] Verifying addon metrics-server=true in "no-preload-816185"
	I1205 20:37:07.210487  585025 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:37:07.212093  585025 addons.go:510] duration metric: took 1.822047986s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
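The three addons enabled above were installed by staging manifests under /etc/kubernetes/addons/ over SSH and applying them with the cluster's own kubectl binary. A sketch of an equivalent single manual apply on the node, using the same paths and binary version shown in the ssh_runner lines above (minikube itself ran them as three separate applies):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.31.2/kubectl apply \
	  -f /etc/kubernetes/addons/storageclass.yaml \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml \
	  -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	  -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	  -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	  -f /etc/kubernetes/addons/metrics-server-service.yaml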
	I1205 20:37:07.658678  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:08.156061  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:08.156094  585025 pod_ready.go:82] duration metric: took 2.513098547s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:08.156109  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:10.162704  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:12.163550  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.163578  585025 pod_ready.go:82] duration metric: took 4.007461295s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.163601  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169123  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.169155  585025 pod_ready.go:82] duration metric: took 5.544964ms for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169170  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.175288  585025 pod_ready.go:103] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:14.676107  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:14.676137  585025 pod_ready.go:82] duration metric: took 2.506959209s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.676146  585025 pod_ready.go:39] duration metric: took 9.038525731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:14.676165  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:37:14.676222  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:37:14.692508  585025 api_server.go:72] duration metric: took 9.302489277s to wait for apiserver process to appear ...
	I1205 20:37:14.692540  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:37:14.692562  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:37:14.697176  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:37:14.698320  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:37:14.698345  585025 api_server.go:131] duration metric: took 5.796971ms to wait for apiserver health ...
	I1205 20:37:14.698357  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:37:14.706456  585025 system_pods.go:59] 9 kube-system pods found
	I1205 20:37:14.706503  585025 system_pods.go:61] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.706512  585025 system_pods.go:61] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.706518  585025 system_pods.go:61] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.706524  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.706529  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.706534  585025 system_pods.go:61] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.706539  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.706549  585025 system_pods.go:61] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.706555  585025 system_pods.go:61] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.706565  585025 system_pods.go:74] duration metric: took 8.200516ms to wait for pod list to return data ...
	I1205 20:37:14.706577  585025 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:37:14.716217  585025 default_sa.go:45] found service account: "default"
	I1205 20:37:14.716259  585025 default_sa.go:55] duration metric: took 9.664045ms for default service account to be created ...
	I1205 20:37:14.716293  585025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:37:14.723293  585025 system_pods.go:86] 9 kube-system pods found
	I1205 20:37:14.723323  585025 system_pods.go:89] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.723329  585025 system_pods.go:89] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.723333  585025 system_pods.go:89] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.723337  585025 system_pods.go:89] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.723342  585025 system_pods.go:89] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.723346  585025 system_pods.go:89] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.723349  585025 system_pods.go:89] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.723355  585025 system_pods.go:89] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.723360  585025 system_pods.go:89] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.723368  585025 system_pods.go:126] duration metric: took 7.067824ms to wait for k8s-apps to be running ...
	I1205 20:37:14.723375  585025 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:37:14.723422  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:14.744142  585025 system_svc.go:56] duration metric: took 20.751867ms WaitForService to wait for kubelet
	I1205 20:37:14.744179  585025 kubeadm.go:582] duration metric: took 9.354165706s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:37:14.744200  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:37:14.751985  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:14.752026  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:14.752043  585025 node_conditions.go:105] duration metric: took 7.836665ms to run NodePressure ...
	I1205 20:37:14.752069  585025 start.go:241] waiting for startup goroutines ...
	I1205 20:37:14.752081  585025 start.go:246] waiting for cluster config update ...
	I1205 20:37:14.752095  585025 start.go:255] writing updated cluster config ...
	I1205 20:37:14.752490  585025 ssh_runner.go:195] Run: rm -f paused
	I1205 20:37:14.806583  585025 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:37:14.808574  585025 out.go:177] * Done! kubectl is now configured to use "no-preload-816185" cluster and "default" namespace by default
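Before declaring the profile ready, minikube probed the apiserver process, the /healthz endpoint and the kube-system pods, as logged above. Rough hand-run equivalents of those checks (the curl -k flag and the kubectl context name matching the profile are assumptions of this sketch, not part of the run):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'                  # apiserver process present
	curl -k https://192.168.61.37:8443/healthz                    # expect "ok"
	kubectl --context no-preload-816185 -n kube-system get pods   # system pods Running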
	I1205 20:37:17.029681  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:37:17.029940  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:37:17.029963  585602 kubeadm.go:310] 
	I1205 20:37:17.030022  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:37:17.030101  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:37:17.030128  585602 kubeadm.go:310] 
	I1205 20:37:17.030167  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:37:17.030209  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:37:17.030353  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:37:17.030369  585602 kubeadm.go:310] 
	I1205 20:37:17.030489  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:37:17.030540  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:37:17.030584  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:37:17.030594  585602 kubeadm.go:310] 
	I1205 20:37:17.030733  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:37:17.030843  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:37:17.030855  585602 kubeadm.go:310] 
	I1205 20:37:17.031025  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:37:17.031154  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:37:17.031268  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:37:17.031374  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:37:17.031386  585602 kubeadm.go:310] 
	I1205 20:37:17.032368  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:17.032493  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:37:17.032562  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 20:37:17.032709  585602 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
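The checks suggested in the kubeadm output above can be run directly on the node; with CRI-O as the runtime, the socket path is the one used throughout this log:

	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the ps output above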
	
	I1205 20:37:17.032762  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:37:17.518572  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:17.533868  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:37:17.547199  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:37:17.547224  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:37:17.547272  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:37:17.556733  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:37:17.556801  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:37:17.566622  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:37:17.577044  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:37:17.577121  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:37:17.588726  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.599269  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:37:17.599346  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.609243  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:37:17.618947  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:37:17.619034  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
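The four config checks above follow one pattern: keep a kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it. A compact sketch of that loop (file names and grep target taken from the log lines above):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done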
	I1205 20:37:17.629228  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:37:17.878785  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:39:13.972213  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:39:13.972379  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 20:39:13.973936  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:39:13.974035  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:39:13.974150  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:39:13.974251  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:39:13.974341  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:39:13.974404  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:39:13.976164  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:39:13.976248  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:39:13.976339  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:39:13.976449  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:39:13.976538  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:39:13.976642  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:39:13.976736  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:39:13.976832  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:39:13.976924  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:39:13.977025  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:39:13.977131  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:39:13.977189  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:39:13.977272  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:39:13.977389  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:39:13.977474  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:39:13.977566  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:39:13.977650  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:39:13.977776  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:39:13.977901  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:39:13.977976  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:39:13.978137  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:39:13.979473  585602 out.go:235]   - Booting up control plane ...
	I1205 20:39:13.979581  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:39:13.979664  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:39:13.979732  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:39:13.979803  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:39:13.979952  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:39:13.980017  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:39:13.980107  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980396  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980511  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980744  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980843  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981116  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981227  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981439  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981528  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981718  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981731  585602 kubeadm.go:310] 
	I1205 20:39:13.981773  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:39:13.981831  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:39:13.981839  585602 kubeadm.go:310] 
	I1205 20:39:13.981888  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:39:13.981941  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:39:13.982052  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:39:13.982059  585602 kubeadm.go:310] 
	I1205 20:39:13.982144  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:39:13.982174  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:39:13.982208  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:39:13.982215  585602 kubeadm.go:310] 
	I1205 20:39:13.982302  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:39:13.982415  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:39:13.982431  585602 kubeadm.go:310] 
	I1205 20:39:13.982540  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:39:13.982618  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:39:13.982701  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:39:13.982766  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:39:13.982839  585602 kubeadm.go:310] 
	I1205 20:39:13.982855  585602 kubeadm.go:394] duration metric: took 7m58.414377536s to StartCluster
	I1205 20:39:13.982907  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:39:13.982975  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:39:14.031730  585602 cri.go:89] found id: ""
	I1205 20:39:14.031767  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.031779  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:39:14.031791  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:39:14.031865  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:39:14.068372  585602 cri.go:89] found id: ""
	I1205 20:39:14.068420  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.068433  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:39:14.068440  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:39:14.068512  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:39:14.106807  585602 cri.go:89] found id: ""
	I1205 20:39:14.106837  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.106847  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:39:14.106856  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:39:14.106930  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:39:14.144926  585602 cri.go:89] found id: ""
	I1205 20:39:14.144952  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.144960  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:39:14.144974  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:39:14.145052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:39:14.182712  585602 cri.go:89] found id: ""
	I1205 20:39:14.182742  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.182754  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:39:14.182762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:39:14.182826  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:39:14.220469  585602 cri.go:89] found id: ""
	I1205 20:39:14.220505  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.220519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:39:14.220527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:39:14.220593  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:39:14.269791  585602 cri.go:89] found id: ""
	I1205 20:39:14.269823  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.269835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:39:14.269842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:39:14.269911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:39:14.313406  585602 cri.go:89] found id: ""
	I1205 20:39:14.313439  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.313450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
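The eight container lookups above are the same crictl query repeated once per expected component; a sketch of the equivalent loop (component list taken from the cri.go lines above):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done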
	I1205 20:39:14.313464  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:39:14.313483  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:39:14.330488  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:39:14.330526  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:39:14.417358  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:39:14.417403  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:39:14.417421  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:39:14.530226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:39:14.530270  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:39:14.585471  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:39:14.585512  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
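The diagnostics gathered above can be reproduced by hand on the node with the same commands minikube ran:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig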
	W1205 20:39:14.636389  585602 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 20:39:14.636456  585602 out.go:270] * 
	W1205 20:39:14.636535  585602 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.636549  585602 out.go:270] * 
	W1205 20:39:14.637475  585602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:39:14.640654  585602 out.go:201] 
	W1205 20:39:14.641873  585602 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.641931  585602 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 20:39:14.641975  585602 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 20:39:14.643389  585602 out.go:201] 
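Following the suggestion above, the start could be retried with the kubelet cgroup driver pinned to systemd. This is a hedged sketch rather than the exact command the test harness uses: the profile name is hypothetical, and --driver=kvm2 plus --container-runtime=crio are assumptions inferred from this job's name, while the Kubernetes version and the extra-config flag come from the log itself.

	# Retry the start with the kubelet cgroup driver forced to systemd,
	# as the suggestion in the log advises. The profile name is hypothetical;
	# --driver and --container-runtime are assumed from the KVM_Linux_crio job.
	minikube start -p old-k8s-version \
	  --driver=kvm2 \
	  --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# If the start still fails, collect logs to attach to a GitHub issue,
	# as the boxed message above suggests.
	minikube logs -p old-k8s-version --file=logs.txt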
	
	
	==> CRI-O <==
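The entries below are CRI-O debug logs of CRI API calls (ListPodSandbox, ListContainers, ImageFsInfo, Version) issued while this report was being collected. The same runtime state can be queried from the node with crictl; a minimal sketch, assuming the cri-o socket path shown earlier in the log:

	# Point crictl at the CRI-O socket referenced earlier in this report
	export CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock

	crictl version      # matches the RuntimeService/Version responses below
	crictl pods         # pod sandboxes, as in the ListPodSandbox response
	crictl ps -a        # all containers, as in the ListContainers responses
	crictl imagefsinfo  # image filesystem usage, as in the ImageFsInfo responses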
	Dec 05 20:46:16 no-preload-816185 crio[716]: time="2024-12-05 20:46:16.915671472Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:931006d780b13a9308d6d2327c9b419e91bffeb3de9e5935cbe02b0851d15e4e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7f33e249-9330-428f-8feb-9f3cf44369be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431027322408407,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f33e249-9330-428f-8feb-9f3cf44369be,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-05T20:37:07.001695584Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33f534c874be20f01361921642bf978866141dfa6b2ce262c522ea2f7a906676,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-8vmd6,Uid:d838e6e3-bd74-4653-9289-4f5375b03d4f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431027268410625,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-8vmd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d838e6e3-bd74-4653-9289-4f5375b03d4f
,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:37:06.948262142Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53a99c02594aa0576b861bf3e66787ace2d67583a1f434a0f7d096ec8b3759d4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gmc2j,Uid:2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431026384037191,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gmc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:37:06.076394890Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:66f4764fbff897d16e10a89a56a33bde2486c315a561e8d9085731c7739aee88,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fmcnh,Uid:fb6a91c8-af65-4fb6-
af77-0a6c45d224a7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431026308392649,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fmcnh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb6a91c8-af65-4fb6-af77-0a6c45d224a7,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:37:05.999357503Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d9ec7600f03c33be405dc2c489181ae902a5cfceea04e1efa0dd1cb864461b7,Metadata:&PodSandboxMetadata{Name:kube-proxy-q8thq,Uid:8be5b50a-e564-4d80-82c4-357db41a3c1e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431026251394143,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-q8thq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be5b50a-e564-4d80-82c4-357db41a3c1e,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:37:05.912759343Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:35edf78f25c164581f5fe53dc04477d4f7b5f86a809070c06c6e8a6195e80344,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-816185,Uid:da8680bb881144cc526df7f123fe0e95,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733431015391792298,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.37:8443,kubernetes.io/config.hash: da8680bb881144cc526df7f123fe0e95,kubernetes.io/config.seen: 2024-12-05T20:36:54.928636544Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b7277a915edeb0280426b492ba4ac082
dfb03f2c3487f931267e7922d51923e9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-816185,Uid:506e81d12c5f83cd43b2eff2f0c3d34c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431015390344854,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506e81d12c5f83cd43b2eff2f0c3d34c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 506e81d12c5f83cd43b2eff2f0c3d34c,kubernetes.io/config.seen: 2024-12-05T20:36:54.928638547Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:84131f77c9d95245453f83e0492a9caf58e571c574c74cdcfabf14f033e3d065,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-816185,Uid:9aa5f7ec329fe85df7db1b6e2f2e8ca6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431015380220784,Labels:map[string]string{component: kube-sche
duler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,kubernetes.io/config.seen: 2024-12-05T20:36:54.928640184Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e91f3f0c1dbac00ec563b1a5bee614262901ddf614869a170c805a03422d225e,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-816185,Uid:3e666b83de89497cad0416a7019a3f69,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431015379675552,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e666b83de89497cad0416a7019a3f69,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.37:2379,
kubernetes.io/config.hash: 3e666b83de89497cad0416a7019a3f69,kubernetes.io/config.seen: 2024-12-05T20:36:54.928631672Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8d359b2c-b6ec-44a1-b7db-49984631e251 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 20:46:16 no-preload-816185 crio[716]: time="2024-12-05 20:46:16.916907268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=245c649e-f2f2-439b-947e-f3d8a8ed12b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:46:16 no-preload-816185 crio[716]: time="2024-12-05 20:46:16.916986840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=245c649e-f2f2-439b-947e-f3d8a8ed12b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:46:16 no-preload-816185 crio[716]: time="2024-12-05 20:46:16.917318112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c0f24978e39c68485fd113a97875a68cbb55f2f341d2ed2baf5b273a694d58,PodSandboxId:931006d780b13a9308d6d2327c9b419e91bffeb3de9e5935cbe02b0851d15e4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733431027656385078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f33e249-9330-428f-8feb-9f3cf44369be,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ae48ce56e049c518439e1342b38f928063d63a4a96344c4ef2c1bcca644865,PodSandboxId:53a99c02594aa0576b861bf3e66787ace2d67583a1f434a0f7d096ec8b3759d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027162024700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gmc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06fcfd39b3fbd6c769eaefecb47afeb543173243258fcf76a17d219da4f2ad6,PodSandboxId:66f4764fbff897d16e10a89a56a33bde2486c315a561e8d9085731c7739aee88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027074517693,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fmcnh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6a91c8-af65-4fb6-af77-0a6c45d224a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6376a359c82bf49229c09bb6cdcea5e1f9805707a7170dc09f462f3387283518,PodSandboxId:9d9ec7600f03c33be405dc2c489181ae902a5cfceea04e1efa0dd1cb864461b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733431026478333480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8thq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be5b50a-e564-4d80-82c4-357db41a3c1e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618e7986042ae61241a2e82d3d6c8bcefb90838bb7c71055021020555e0b3299,PodSandboxId:35edf78f25c164581f5fe53dc04477d4f7b5f86a809070c06c6e8a6195e80344,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733431015687776718,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4174cf957b5e115c096629210df81f40bc1558a5af8b9dd0145a6eff3e4be3f9,PodSandboxId:84131f77c9d95245453f83e0492a9caf58e571c574c74cdcfabf14f033e3d065,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733431015698339930,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7450815c261e8981559e06736aa54abfbc808b74e77d643fb144e294aa664284,PodSandboxId:b7277a915edeb0280426b492ba4ac082dfb03f2c3487f931267e7922d51923e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733431015670986045,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506e81d12c5f83cd43b2eff2f0c3d34c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161f5440479a346e1b4482f9f909e116f60da19890fd7b1635ef87164a1978fa,PodSandboxId:e91f3f0c1dbac00ec563b1a5bee614262901ddf614869a170c805a03422d225e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733431015562416640,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e666b83de89497cad0416a7019a3f69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=245c649e-f2f2-439b-947e-f3d8a8ed12b8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:46:16 no-preload-816185 crio[716]: time="2024-12-05 20:46:16.959471664Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a4b6ba1-0de3-4b47-a283-2f34a6ba7bad name=/runtime.v1.RuntimeService/Version
	Dec 05 20:46:16 no-preload-816185 crio[716]: time="2024-12-05 20:46:16.959583348Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a4b6ba1-0de3-4b47-a283-2f34a6ba7bad name=/runtime.v1.RuntimeService/Version
	Dec 05 20:46:16 no-preload-816185 crio[716]: time="2024-12-05 20:46:16.961188486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66f9afb1-2b6c-499d-a9c8-126a83357301 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:46:16 no-preload-816185 crio[716]: time="2024-12-05 20:46:16.961681995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431576961651865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66f9afb1-2b6c-499d-a9c8-126a83357301 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:46:16 no-preload-816185 crio[716]: time="2024-12-05 20:46:16.962408321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=932d4190-7292-46f0-8e09-544016ddaeef name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:46:16 no-preload-816185 crio[716]: time="2024-12-05 20:46:16.962506124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=932d4190-7292-46f0-8e09-544016ddaeef name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:46:16 no-preload-816185 crio[716]: time="2024-12-05 20:46:16.962770529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c0f24978e39c68485fd113a97875a68cbb55f2f341d2ed2baf5b273a694d58,PodSandboxId:931006d780b13a9308d6d2327c9b419e91bffeb3de9e5935cbe02b0851d15e4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733431027656385078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f33e249-9330-428f-8feb-9f3cf44369be,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ae48ce56e049c518439e1342b38f928063d63a4a96344c4ef2c1bcca644865,PodSandboxId:53a99c02594aa0576b861bf3e66787ace2d67583a1f434a0f7d096ec8b3759d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027162024700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gmc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06fcfd39b3fbd6c769eaefecb47afeb543173243258fcf76a17d219da4f2ad6,PodSandboxId:66f4764fbff897d16e10a89a56a33bde2486c315a561e8d9085731c7739aee88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027074517693,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fmcnh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6a91c8-af65-4fb6-af77-0a6c45d224a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6376a359c82bf49229c09bb6cdcea5e1f9805707a7170dc09f462f3387283518,PodSandboxId:9d9ec7600f03c33be405dc2c489181ae902a5cfceea04e1efa0dd1cb864461b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733431026478333480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8thq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be5b50a-e564-4d80-82c4-357db41a3c1e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618e7986042ae61241a2e82d3d6c8bcefb90838bb7c71055021020555e0b3299,PodSandboxId:35edf78f25c164581f5fe53dc04477d4f7b5f86a809070c06c6e8a6195e80344,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733431015687776718,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4174cf957b5e115c096629210df81f40bc1558a5af8b9dd0145a6eff3e4be3f9,PodSandboxId:84131f77c9d95245453f83e0492a9caf58e571c574c74cdcfabf14f033e3d065,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733431015698339930,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7450815c261e8981559e06736aa54abfbc808b74e77d643fb144e294aa664284,PodSandboxId:b7277a915edeb0280426b492ba4ac082dfb03f2c3487f931267e7922d51923e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733431015670986045,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506e81d12c5f83cd43b2eff2f0c3d34c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161f5440479a346e1b4482f9f909e116f60da19890fd7b1635ef87164a1978fa,PodSandboxId:e91f3f0c1dbac00ec563b1a5bee614262901ddf614869a170c805a03422d225e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733431015562416640,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e666b83de89497cad0416a7019a3f69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecd63676c708063caf41eb906794e14fa58acf17026504bce946f0a33f379e64,PodSandboxId:ea53df1bd26635b77439dfd8964fe32893903a6a261115b69cea74ec25ab65ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430727274991857,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=932d4190-7292-46f0-8e09-544016ddaeef name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.009532042Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d373b52-e8d0-43df-ad49-a52f9cf02f95 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.009605380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d373b52-e8d0-43df-ad49-a52f9cf02f95 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.011172107Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4189172e-6052-4763-b78e-97fba732bfb0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.011525522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431577011503853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4189172e-6052-4763-b78e-97fba732bfb0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.012349252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1eb7a643-093c-4bc3-a365-e19158e852f8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.012401900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1eb7a643-093c-4bc3-a365-e19158e852f8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.012604434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c0f24978e39c68485fd113a97875a68cbb55f2f341d2ed2baf5b273a694d58,PodSandboxId:931006d780b13a9308d6d2327c9b419e91bffeb3de9e5935cbe02b0851d15e4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733431027656385078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f33e249-9330-428f-8feb-9f3cf44369be,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ae48ce56e049c518439e1342b38f928063d63a4a96344c4ef2c1bcca644865,PodSandboxId:53a99c02594aa0576b861bf3e66787ace2d67583a1f434a0f7d096ec8b3759d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027162024700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gmc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06fcfd39b3fbd6c769eaefecb47afeb543173243258fcf76a17d219da4f2ad6,PodSandboxId:66f4764fbff897d16e10a89a56a33bde2486c315a561e8d9085731c7739aee88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027074517693,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fmcnh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6a91c8-af65-4fb6-af77-0a6c45d224a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6376a359c82bf49229c09bb6cdcea5e1f9805707a7170dc09f462f3387283518,PodSandboxId:9d9ec7600f03c33be405dc2c489181ae902a5cfceea04e1efa0dd1cb864461b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733431026478333480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8thq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be5b50a-e564-4d80-82c4-357db41a3c1e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618e7986042ae61241a2e82d3d6c8bcefb90838bb7c71055021020555e0b3299,PodSandboxId:35edf78f25c164581f5fe53dc04477d4f7b5f86a809070c06c6e8a6195e80344,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733431015687776718,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4174cf957b5e115c096629210df81f40bc1558a5af8b9dd0145a6eff3e4be3f9,PodSandboxId:84131f77c9d95245453f83e0492a9caf58e571c574c74cdcfabf14f033e3d065,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733431015698339930,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7450815c261e8981559e06736aa54abfbc808b74e77d643fb144e294aa664284,PodSandboxId:b7277a915edeb0280426b492ba4ac082dfb03f2c3487f931267e7922d51923e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733431015670986045,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506e81d12c5f83cd43b2eff2f0c3d34c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161f5440479a346e1b4482f9f909e116f60da19890fd7b1635ef87164a1978fa,PodSandboxId:e91f3f0c1dbac00ec563b1a5bee614262901ddf614869a170c805a03422d225e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733431015562416640,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e666b83de89497cad0416a7019a3f69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecd63676c708063caf41eb906794e14fa58acf17026504bce946f0a33f379e64,PodSandboxId:ea53df1bd26635b77439dfd8964fe32893903a6a261115b69cea74ec25ab65ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430727274991857,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1eb7a643-093c-4bc3-a365-e19158e852f8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.048202082Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20a772cc-9878-4f33-9818-063e4339bd24 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.048274972Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20a772cc-9878-4f33-9818-063e4339bd24 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.050183366Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ce9769a-0d34-4e3e-a328-4c8cf01881f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.051161002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431577051024144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ce9769a-0d34-4e3e-a328-4c8cf01881f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.052579926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b91fae0a-5b62-4eaf-88e9-b6efd0e44da3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.052668067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b91fae0a-5b62-4eaf-88e9-b6efd0e44da3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:46:17 no-preload-816185 crio[716]: time="2024-12-05 20:46:17.052987866Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c0f24978e39c68485fd113a97875a68cbb55f2f341d2ed2baf5b273a694d58,PodSandboxId:931006d780b13a9308d6d2327c9b419e91bffeb3de9e5935cbe02b0851d15e4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733431027656385078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f33e249-9330-428f-8feb-9f3cf44369be,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ae48ce56e049c518439e1342b38f928063d63a4a96344c4ef2c1bcca644865,PodSandboxId:53a99c02594aa0576b861bf3e66787ace2d67583a1f434a0f7d096ec8b3759d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027162024700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gmc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06fcfd39b3fbd6c769eaefecb47afeb543173243258fcf76a17d219da4f2ad6,PodSandboxId:66f4764fbff897d16e10a89a56a33bde2486c315a561e8d9085731c7739aee88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027074517693,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fmcnh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6a91c8-af65-4fb6-af77-0a6c45d224a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6376a359c82bf49229c09bb6cdcea5e1f9805707a7170dc09f462f3387283518,PodSandboxId:9d9ec7600f03c33be405dc2c489181ae902a5cfceea04e1efa0dd1cb864461b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733431026478333480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8thq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be5b50a-e564-4d80-82c4-357db41a3c1e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618e7986042ae61241a2e82d3d6c8bcefb90838bb7c71055021020555e0b3299,PodSandboxId:35edf78f25c164581f5fe53dc04477d4f7b5f86a809070c06c6e8a6195e80344,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733431015687776718,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4174cf957b5e115c096629210df81f40bc1558a5af8b9dd0145a6eff3e4be3f9,PodSandboxId:84131f77c9d95245453f83e0492a9caf58e571c574c74cdcfabf14f033e3d065,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733431015698339930,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7450815c261e8981559e06736aa54abfbc808b74e77d643fb144e294aa664284,PodSandboxId:b7277a915edeb0280426b492ba4ac082dfb03f2c3487f931267e7922d51923e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733431015670986045,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506e81d12c5f83cd43b2eff2f0c3d34c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161f5440479a346e1b4482f9f909e116f60da19890fd7b1635ef87164a1978fa,PodSandboxId:e91f3f0c1dbac00ec563b1a5bee614262901ddf614869a170c805a03422d225e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733431015562416640,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e666b83de89497cad0416a7019a3f69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecd63676c708063caf41eb906794e14fa58acf17026504bce946f0a33f379e64,PodSandboxId:ea53df1bd26635b77439dfd8964fe32893903a6a261115b69cea74ec25ab65ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430727274991857,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b91fae0a-5b62-4eaf-88e9-b6efd0e44da3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	92c0f24978e39       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   931006d780b13       storage-provisioner
	f4ae48ce56e04       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   53a99c02594aa       coredns-7c65d6cfc9-gmc2j
	d06fcfd39b3fb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   66f4764fbff89       coredns-7c65d6cfc9-fmcnh
	6376a359c82bf       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   9d9ec7600f03c       kube-proxy-q8thq
	4174cf957b5e1       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   84131f77c9d95       kube-scheduler-no-preload-816185
	618e7986042ae       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   35edf78f25c16       kube-apiserver-no-preload-816185
	7450815c261e8       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   b7277a915edeb       kube-controller-manager-no-preload-816185
	161f5440479a3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   e91f3f0c1dbac       etcd-no-preload-816185
	ecd63676c7080       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   ea53df1bd2663       kube-apiserver-no-preload-816185
	
	
	==> coredns [d06fcfd39b3fbd6c769eaefecb47afeb543173243258fcf76a17d219da4f2ad6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f4ae48ce56e049c518439e1342b38f928063d63a4a96344c4ef2c1bcca644865] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-816185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-816185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=no-preload-816185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_37_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:36:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-816185
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:46:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:42:16 +0000   Thu, 05 Dec 2024 20:36:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:42:16 +0000   Thu, 05 Dec 2024 20:36:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:42:16 +0000   Thu, 05 Dec 2024 20:36:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:42:16 +0000   Thu, 05 Dec 2024 20:36:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.37
	  Hostname:    no-preload-816185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2460e8cea62d4fb59d491e8972590e87
	  System UUID:                2460e8ce-a62d-4fb5-9d49-1e8972590e87
	  Boot ID:                    0830dec6-1ea9-489d-962f-e22d48911390
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-fmcnh                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-gmc2j                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-no-preload-816185                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-no-preload-816185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-no-preload-816185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-q8thq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-no-preload-816185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-8vmd6              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m9s   kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s  kubelet          Node no-preload-816185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s  kubelet          Node no-preload-816185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s  kubelet          Node no-preload-816185 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node no-preload-816185 event: Registered Node no-preload-816185 in Controller
	
	
	==> dmesg <==
	[  +0.045078] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.207107] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.892115] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.643364] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.014661] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.059569] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064310] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.194812] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.149612] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.293826] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[Dec 5 20:32] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.061754] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.805619] systemd-fstab-generator[1438]: Ignoring "noauto" option for root device
	[  +4.499754] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.046674] kauditd_printk_skb: 79 callbacks suppressed
	[ +27.038300] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 5 20:36] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.712641] systemd-fstab-generator[3136]: Ignoring "noauto" option for root device
	[  +6.074315] systemd-fstab-generator[3465]: Ignoring "noauto" option for root device
	[Dec 5 20:37] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.801087] systemd-fstab-generator[3589]: Ignoring "noauto" option for root device
	[  +0.853578] kauditd_printk_skb: 36 callbacks suppressed
	[  +7.370342] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [161f5440479a346e1b4482f9f909e116f60da19890fd7b1635ef87164a1978fa] <==
	{"level":"info","ts":"2024-12-05T20:36:56.048753Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T20:36:56.049390Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"32539c5013f3ec41","initial-advertise-peer-urls":["https://192.168.61.37:2380"],"listen-peer-urls":["https://192.168.61.37:2380"],"advertise-client-urls":["https://192.168.61.37:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.37:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T20:36:56.049569Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T20:36:56.049773Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.37:2380"}
	{"level":"info","ts":"2024-12-05T20:36:56.049876Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.37:2380"}
	{"level":"info","ts":"2024-12-05T20:36:56.483904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-05T20:36:56.483998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-05T20:36:56.484049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 received MsgPreVoteResp from 32539c5013f3ec41 at term 1"}
	{"level":"info","ts":"2024-12-05T20:36:56.484085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 became candidate at term 2"}
	{"level":"info","ts":"2024-12-05T20:36:56.484109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 received MsgVoteResp from 32539c5013f3ec41 at term 2"}
	{"level":"info","ts":"2024-12-05T20:36:56.484136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 became leader at term 2"}
	{"level":"info","ts":"2024-12-05T20:36:56.484162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 32539c5013f3ec41 elected leader 32539c5013f3ec41 at term 2"}
	{"level":"info","ts":"2024-12-05T20:36:56.488117Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"32539c5013f3ec41","local-member-attributes":"{Name:no-preload-816185 ClientURLs:[https://192.168.61.37:2379]}","request-path":"/0/members/32539c5013f3ec41/attributes","cluster-id":"ee6bec4ef8ef7744","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:36:56.488338Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:36:56.493936Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:36:56.494372Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:36:56.496122Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:36:56.499015Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:36:56.498528Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:36:56.499781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.37:2379"}
	{"level":"info","ts":"2024-12-05T20:36:56.500117Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ee6bec4ef8ef7744","local-member-id":"32539c5013f3ec41","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:36:56.505170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:36:56.505272Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:36:56.512336Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:36:56.517085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:46:17 up 14 min,  0 users,  load average: 0.11, 0.15, 0.14
	Linux no-preload-816185 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [618e7986042ae61241a2e82d3d6c8bcefb90838bb7c71055021020555e0b3299] <==
	W1205 20:41:59.155594       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:41:59.155717       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:41:59.156733       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:41:59.156762       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:42:59.157437       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:42:59.157647       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 20:42:59.157698       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:42:59.157729       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 20:42:59.158843       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:42:59.158897       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:44:59.159254       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:44:59.159426       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1205 20:44:59.159595       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:44:59.159671       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:44:59.160887       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:44:59.160997       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [ecd63676c708063caf41eb906794e14fa58acf17026504bce946f0a33f379e64] <==
	W1205 20:36:48.209017       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:50.576909       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:50.947091       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:51.454426       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:51.578238       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:51.786247       1 logging.go:55] [core] [Channel #16 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:51.799935       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:51.983349       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.083885       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.280212       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.302166       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.376576       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.394316       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.410109       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.423938       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.488368       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.794747       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.814263       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.898555       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.002731       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.018683       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.020093       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.036037       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.093178       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.095656       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7450815c261e8981559e06736aa54abfbc808b74e77d643fb144e294aa664284] <==
	E1205 20:41:05.137493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:41:05.605895       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:41:35.145281       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:41:35.614402       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:42:05.152013       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:42:05.622577       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:42:16.656702       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-816185"
	E1205 20:42:35.159688       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:42:35.631414       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:42:53.938722       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="493.895µs"
	E1205 20:43:05.166385       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:43:05.640568       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:43:05.933333       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="57.675µs"
	E1205 20:43:35.172948       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:43:35.652430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:44:05.179782       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:44:05.660602       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:44:35.189362       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:44:35.669676       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:45:05.196488       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:45:05.679312       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:45:35.202709       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:45:35.691099       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:46:05.209030       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:46:05.701074       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6376a359c82bf49229c09bb6cdcea5e1f9805707a7170dc09f462f3387283518] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:37:07.331870       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:37:07.368405       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.37"]
	E1205 20:37:07.376336       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:37:07.582178       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:37:07.582221       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:37:07.582253       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:37:07.585614       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:37:07.586133       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:37:07.586342       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:37:07.587739       1 config.go:199] "Starting service config controller"
	I1205 20:37:07.587798       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:37:07.587947       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:37:07.588043       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:37:07.588561       1 config.go:328] "Starting node config controller"
	I1205 20:37:07.588633       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:37:07.688491       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:37:07.688678       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:37:07.688692       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4174cf957b5e115c096629210df81f40bc1558a5af8b9dd0145a6eff3e4be3f9] <==
	W1205 20:36:58.211088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:36:58.212788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.112415       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:36:59.112484       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 20:36:59.176506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:36:59.176565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.288043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:36:59.288101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.303975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:36:59.304027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.336035       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:36:59.336089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.340999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:36:59.341049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.473292       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 20:36:59.473390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.479286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:36:59.479376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.497573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:36:59.498277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.541581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:36:59.542482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.565563       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:36:59.565691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1205 20:37:01.696505       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:45:11 no-preload-816185 kubelet[3472]: E1205 20:45:11.026963    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431511026650240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:11 no-preload-816185 kubelet[3472]: E1205 20:45:11.026986    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431511026650240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:11 no-preload-816185 kubelet[3472]: E1205 20:45:11.914233    3472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vmd6" podUID="d838e6e3-bd74-4653-9289-4f5375b03d4f"
	Dec 05 20:45:21 no-preload-816185 kubelet[3472]: E1205 20:45:21.029609    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431521028373159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:21 no-preload-816185 kubelet[3472]: E1205 20:45:21.030345    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431521028373159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:22 no-preload-816185 kubelet[3472]: E1205 20:45:22.914509    3472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vmd6" podUID="d838e6e3-bd74-4653-9289-4f5375b03d4f"
	Dec 05 20:45:31 no-preload-816185 kubelet[3472]: E1205 20:45:31.034068    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431531033722631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:31 no-preload-816185 kubelet[3472]: E1205 20:45:31.034112    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431531033722631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:34 no-preload-816185 kubelet[3472]: E1205 20:45:34.915129    3472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vmd6" podUID="d838e6e3-bd74-4653-9289-4f5375b03d4f"
	Dec 05 20:45:41 no-preload-816185 kubelet[3472]: E1205 20:45:41.037254    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431541036484923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:41 no-preload-816185 kubelet[3472]: E1205 20:45:41.037878    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431541036484923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:46 no-preload-816185 kubelet[3472]: E1205 20:45:46.915371    3472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vmd6" podUID="d838e6e3-bd74-4653-9289-4f5375b03d4f"
	Dec 05 20:45:51 no-preload-816185 kubelet[3472]: E1205 20:45:51.039568    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431551039220776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:51 no-preload-816185 kubelet[3472]: E1205 20:45:51.039592    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431551039220776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:45:58 no-preload-816185 kubelet[3472]: E1205 20:45:58.914738    3472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vmd6" podUID="d838e6e3-bd74-4653-9289-4f5375b03d4f"
	Dec 05 20:46:00 no-preload-816185 kubelet[3472]: E1205 20:46:00.946148    3472 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 20:46:00 no-preload-816185 kubelet[3472]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:46:00 no-preload-816185 kubelet[3472]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:46:00 no-preload-816185 kubelet[3472]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:46:00 no-preload-816185 kubelet[3472]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:46:01 no-preload-816185 kubelet[3472]: E1205 20:46:01.040913    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431561040385242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:46:01 no-preload-816185 kubelet[3472]: E1205 20:46:01.040972    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431561040385242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:46:11 no-preload-816185 kubelet[3472]: E1205 20:46:11.043147    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431571042317571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:46:11 no-preload-816185 kubelet[3472]: E1205 20:46:11.043665    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431571042317571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:46:11 no-preload-816185 kubelet[3472]: E1205 20:46:11.914306    3472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vmd6" podUID="d838e6e3-bd74-4653-9289-4f5375b03d4f"
	
	
	==> storage-provisioner [92c0f24978e39c68485fd113a97875a68cbb55f2f341d2ed2baf5b273a694d58] <==
	I1205 20:37:07.792160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:37:07.803702       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:37:07.805281       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:37:07.818153       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:37:07.818380       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-816185_33b20623-4a0d-43c8-856d-b94d6915ca61!
	I1205 20:37:07.819504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3280737f-4498-47b2-a755-b949acc1ab4b", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-816185_33b20623-4a0d-43c8-856d-b94d6915ca61 became leader
	I1205 20:37:07.919372       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-816185_33b20623-4a0d-43c8-856d-b94d6915ca61!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-816185 -n no-preload-816185
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-816185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-8vmd6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-816185 describe pod metrics-server-6867b74b74-8vmd6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-816185 describe pod metrics-server-6867b74b74-8vmd6: exit status 1 (67.65762ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-8vmd6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-816185 describe pod metrics-server-6867b74b74-8vmd6: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.43s)
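The describe step above returns NotFound because the metrics-server pod named in the non-running-pod listing (metrics-server-6867b74b74-8vmd6) is already gone when kubectl describe runs a moment later. A minimal manual re-check, assuming the same no-preload-816185 context and the conventional k8s-app=metrics-server label (an assumption, not taken from this report), would be:

	# Repeat the helper's listing of non-running pods
	kubectl --context no-preload-816185 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# Describe by label instead of a possibly stale pod name (hypothetical label selector)
	kubectl --context no-preload-816185 -n kube-system describe pod -l k8s-app=metrics-server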

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
E1205 20:40:51.381741  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
E1205 20:43:15.011822  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
[previous warning repeated 83 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
E1205 20:45:51.381695  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
E1205 20:48:15.012094  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386085 -n old-k8s-version-386085
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 2 (243.211063ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-386085" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 2 (240.844974ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-386085 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-386085 logs -n 25: (1.592356929s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-790679 -- sudo                         | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-790679                                 | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-886958                           | kubernetes-upgrade-886958    | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-816185             | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-789000            | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-242147 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | disable-driver-mounts-242147                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:25 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386085        | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-942599  | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-816185                  | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-789000                 | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:37 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386085             | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-942599       | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:36 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:28:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:28:03.038037  585929 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:28:03.038168  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038178  585929 out.go:358] Setting ErrFile to fd 2...
	I1205 20:28:03.038185  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038375  585929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:28:03.038955  585929 out.go:352] Setting JSON to false
	I1205 20:28:03.039948  585929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":11429,"bootTime":1733419054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:28:03.040015  585929 start.go:139] virtualization: kvm guest
	I1205 20:28:03.042326  585929 out.go:177] * [default-k8s-diff-port-942599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:28:03.044291  585929 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:28:03.044320  585929 notify.go:220] Checking for updates...
	I1205 20:28:03.047072  585929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:28:03.048480  585929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:28:03.049796  585929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:28:03.051035  585929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:28:03.052263  585929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:28:03.054167  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:28:03.054665  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.054749  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.070361  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33501
	I1205 20:28:03.070891  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.071534  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.071563  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.071995  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.072285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.072587  585929 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:28:03.072920  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.072968  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.088186  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I1205 20:28:03.088660  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.089202  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.089224  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.089542  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.089782  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.122562  585929 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:28:03.123970  585929 start.go:297] selected driver: kvm2
	I1205 20:28:03.123992  585929 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.124128  585929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:28:03.125014  585929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.125111  585929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:28:03.140461  585929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:28:03.140904  585929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:28:03.140943  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:28:03.141015  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:28:03.141067  585929 start.go:340] cluster config:
	{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.141179  585929 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.144215  585929 out.go:177] * Starting "default-k8s-diff-port-942599" primary control-plane node in "default-k8s-diff-port-942599" cluster
	I1205 20:28:03.276565  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:03.145620  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:28:03.145661  585929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:28:03.145676  585929 cache.go:56] Caching tarball of preloaded images
	I1205 20:28:03.145844  585929 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:28:03.145864  585929 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:28:03.146005  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:28:03.146240  585929 start.go:360] acquireMachinesLock for default-k8s-diff-port-942599: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:28:06.348547  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:12.428620  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:15.500614  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:21.580587  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:24.652618  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:30.732598  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:33.804612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:39.884624  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:42.956577  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:49.036617  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:52.108607  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:58.188605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:01.260573  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:07.340591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:10.412578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:16.492574  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:19.564578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:25.644591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:28.716619  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:34.796609  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:37.868605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:43.948594  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:47.020553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:53.100499  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:56.172560  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:02.252612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:05.324648  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:11.404563  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:14.476553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:20.556568  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
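(The long run of `connect: no route to host` entries above is the kvm2 driver repeatedly dialing the guest's SSH port at 192.168.61.37:22 while the VM is still unreachable. Purely as an illustration of that wait pattern, a standalone shell sketch using plain `nc` rather than minikube's own Go dialer — the IP and port come from the log, the deadline is an arbitrary assumption of this sketch:)

#!/usr/bin/env bash
# Illustrative only: poll an SSH port until it answers or a deadline passes.
# Mirrors the retry pattern behind the log lines above; not minikube's actual code.
HOST=192.168.61.37            # guest IP taken from the log
PORT=22
DEADLINE=$((SECONDS + 300))   # give up after 5 minutes (arbitrary for this sketch)
until nc -z -w 3 "$HOST" "$PORT"; do
  if (( SECONDS >= DEADLINE )); then
    echo "timed out waiting for ${HOST}:${PORT}" >&2
    exit 1
  fi
  sleep 3
done
echo "${HOST}:${PORT} is reachable"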
	I1205 20:30:23.561620  585113 start.go:364] duration metric: took 4m32.790399884s to acquireMachinesLock for "embed-certs-789000"
	I1205 20:30:23.561696  585113 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:23.561711  585113 fix.go:54] fixHost starting: 
	I1205 20:30:23.562327  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:23.562400  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:23.578260  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I1205 20:30:23.578843  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:23.579379  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:30:23.579405  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:23.579776  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:23.580051  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:23.580222  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:30:23.582161  585113 fix.go:112] recreateIfNeeded on embed-certs-789000: state=Stopped err=<nil>
	I1205 20:30:23.582190  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	W1205 20:30:23.582386  585113 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:23.584585  585113 out.go:177] * Restarting existing kvm2 VM for "embed-certs-789000" ...
	I1205 20:30:23.586583  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Start
	I1205 20:30:23.586835  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring networks are active...
	I1205 20:30:23.587628  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network default is active
	I1205 20:30:23.587937  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network mk-embed-certs-789000 is active
	I1205 20:30:23.588228  585113 main.go:141] libmachine: (embed-certs-789000) Getting domain xml...
	I1205 20:30:23.588898  585113 main.go:141] libmachine: (embed-certs-789000) Creating domain...
	I1205 20:30:24.829936  585113 main.go:141] libmachine: (embed-certs-789000) Waiting to get IP...
	I1205 20:30:24.830897  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:24.831398  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:24.831465  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:24.831364  586433 retry.go:31] will retry after 208.795355ms: waiting for machine to come up
	I1205 20:30:25.042078  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.042657  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.042689  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.042599  586433 retry.go:31] will retry after 385.313968ms: waiting for machine to come up
	I1205 20:30:25.429439  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.429877  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.429913  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.429811  586433 retry.go:31] will retry after 432.591358ms: waiting for machine to come up
	I1205 20:30:23.558453  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:30:23.558508  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.558905  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:30:23.558943  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.559166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:30:23.561471  585025 machine.go:96] duration metric: took 4m37.380964872s to provisionDockerMachine
	I1205 20:30:23.561518  585025 fix.go:56] duration metric: took 4m37.403172024s for fixHost
	I1205 20:30:23.561524  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 4m37.40319095s
	W1205 20:30:23.561546  585025 start.go:714] error starting host: provision: host is not running
	W1205 20:30:23.561677  585025 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:30:23.561688  585025 start.go:729] Will try again in 5 seconds ...
	I1205 20:30:25.864656  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.865217  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.865255  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.865138  586433 retry.go:31] will retry after 571.148349ms: waiting for machine to come up
	I1205 20:30:26.437644  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:26.438220  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:26.438250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:26.438165  586433 retry.go:31] will retry after 585.234455ms: waiting for machine to come up
	I1205 20:30:27.025107  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.025510  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.025538  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.025459  586433 retry.go:31] will retry after 648.291531ms: waiting for machine to come up
	I1205 20:30:27.675457  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.675898  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.675928  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.675838  586433 retry.go:31] will retry after 804.071148ms: waiting for machine to come up
	I1205 20:30:28.481966  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:28.482386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:28.482416  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:28.482329  586433 retry.go:31] will retry after 905.207403ms: waiting for machine to come up
	I1205 20:30:29.388933  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:29.389546  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:29.389571  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:29.389484  586433 retry.go:31] will retry after 1.48894232s: waiting for machine to come up
	I1205 20:30:28.562678  585025 start.go:360] acquireMachinesLock for no-preload-816185: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:30:30.880218  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:30.880742  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:30.880773  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:30.880685  586433 retry.go:31] will retry after 2.314200549s: waiting for machine to come up
	I1205 20:30:33.198477  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:33.198998  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:33.199029  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:33.198945  586433 retry.go:31] will retry after 1.922541264s: waiting for machine to come up
	I1205 20:30:35.123922  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:35.124579  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:35.124607  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:35.124524  586433 retry.go:31] will retry after 3.537087912s: waiting for machine to come up
	I1205 20:30:38.662839  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:38.663212  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:38.663250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:38.663160  586433 retry.go:31] will retry after 3.371938424s: waiting for machine to come up
	I1205 20:30:43.457332  585602 start.go:364] duration metric: took 3m31.488905557s to acquireMachinesLock for "old-k8s-version-386085"
	I1205 20:30:43.457418  585602 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:43.457427  585602 fix.go:54] fixHost starting: 
	I1205 20:30:43.457835  585602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:43.457891  585602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:43.474845  585602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I1205 20:30:43.475386  585602 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:43.475993  585602 main.go:141] libmachine: Using API Version  1
	I1205 20:30:43.476026  585602 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:43.476404  585602 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:43.476613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:30:43.476778  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetState
	I1205 20:30:43.478300  585602 fix.go:112] recreateIfNeeded on old-k8s-version-386085: state=Stopped err=<nil>
	I1205 20:30:43.478329  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	W1205 20:30:43.478502  585602 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:43.480644  585602 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386085" ...
	I1205 20:30:42.038738  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039204  585113 main.go:141] libmachine: (embed-certs-789000) Found IP for machine: 192.168.39.200
	I1205 20:30:42.039235  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has current primary IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039244  585113 main.go:141] libmachine: (embed-certs-789000) Reserving static IP address...
	I1205 20:30:42.039760  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.039806  585113 main.go:141] libmachine: (embed-certs-789000) DBG | skip adding static IP to network mk-embed-certs-789000 - found existing host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"}
	I1205 20:30:42.039819  585113 main.go:141] libmachine: (embed-certs-789000) Reserved static IP address: 192.168.39.200
	I1205 20:30:42.039835  585113 main.go:141] libmachine: (embed-certs-789000) Waiting for SSH to be available...
	I1205 20:30:42.039843  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Getting to WaitForSSH function...
	I1205 20:30:42.042013  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042352  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.042386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042542  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH client type: external
	I1205 20:30:42.042562  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa (-rw-------)
	I1205 20:30:42.042586  585113 main.go:141] libmachine: (embed-certs-789000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:30:42.042595  585113 main.go:141] libmachine: (embed-certs-789000) DBG | About to run SSH command:
	I1205 20:30:42.042603  585113 main.go:141] libmachine: (embed-certs-789000) DBG | exit 0
	I1205 20:30:42.168573  585113 main.go:141] libmachine: (embed-certs-789000) DBG | SSH cmd err, output: <nil>: 
	I1205 20:30:42.168960  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetConfigRaw
	I1205 20:30:42.169783  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.172396  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.172790  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.172818  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.173023  585113 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/config.json ...
	I1205 20:30:42.173214  585113 machine.go:93] provisionDockerMachine start ...
	I1205 20:30:42.173234  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:42.173465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.175399  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175754  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.175785  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175885  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.176063  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176208  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176412  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.176583  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.176816  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.176830  585113 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:30:42.280829  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:30:42.280861  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281135  585113 buildroot.go:166] provisioning hostname "embed-certs-789000"
	I1205 20:30:42.281168  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.284355  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284692  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.284723  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284817  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.285019  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285185  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285338  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.285511  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.285716  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.285730  585113 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-789000 && echo "embed-certs-789000" | sudo tee /etc/hostname
	I1205 20:30:42.409310  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-789000
	
	I1205 20:30:42.409370  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.412182  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412524  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.412566  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412779  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.412989  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413137  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413278  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.413468  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.413674  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.413690  585113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-789000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-789000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-789000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:30:42.529773  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:30:42.529806  585113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:30:42.529829  585113 buildroot.go:174] setting up certificates
	I1205 20:30:42.529841  585113 provision.go:84] configureAuth start
	I1205 20:30:42.529850  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.530201  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.533115  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533527  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.533558  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533753  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.535921  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536310  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.536339  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536518  585113 provision.go:143] copyHostCerts
	I1205 20:30:42.536610  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:30:42.536631  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:30:42.536698  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:30:42.536793  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:30:42.536802  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:30:42.536826  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:30:42.536880  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:30:42.536887  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:30:42.536908  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:30:42.536956  585113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-789000 san=[127.0.0.1 192.168.39.200 embed-certs-789000 localhost minikube]
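(The `generating server cert` line above lists the SANs minikube bakes into the machine's server certificate: 127.0.0.1, 192.168.39.200, embed-certs-789000, localhost, minikube. minikube signs this certificate with its own Go code; purely for illustration, an equivalent certificate could be produced by hand with openssl against the same CA material, roughly as below — the output paths and validity period are assumptions of this sketch, not what the test run used:)

# Illustrative openssl equivalent of the server cert generation logged above.
# Not what minikube runs (it signs certs in Go); paths and -days are assumed.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr \
  -subj "/O=jenkins.embed-certs-789000"
openssl x509 -req -in server.csr \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -days 365 -out server.pem \
  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.200,DNS:embed-certs-789000,DNS:localhost,DNS:minikube")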
	I1205 20:30:42.832543  585113 provision.go:177] copyRemoteCerts
	I1205 20:30:42.832610  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:30:42.832640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.835403  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835669  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.835701  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835848  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.836027  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.836161  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.836314  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:42.918661  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:30:42.943903  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 20:30:42.968233  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:30:42.993174  585113 provision.go:87] duration metric: took 463.317149ms to configureAuth
	I1205 20:30:42.993249  585113 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:30:42.993449  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:30:42.993554  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.996211  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996637  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.996696  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996841  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.997049  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997196  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997305  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.997458  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.997641  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.997656  585113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:30:43.220096  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:30:43.220127  585113 machine.go:96] duration metric: took 1.046899757s to provisionDockerMachine
	I1205 20:30:43.220141  585113 start.go:293] postStartSetup for "embed-certs-789000" (driver="kvm2")
	I1205 20:30:43.220152  585113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:30:43.220176  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.220544  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:30:43.220584  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.223481  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.223860  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.223889  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.224102  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.224316  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.224483  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.224667  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.307878  585113 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:30:43.312875  585113 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:30:43.312905  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:30:43.312981  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:30:43.313058  585113 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:30:43.313169  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:30:43.323221  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:43.347978  585113 start.go:296] duration metric: took 127.819083ms for postStartSetup
	I1205 20:30:43.348023  585113 fix.go:56] duration metric: took 19.786318897s for fixHost
	I1205 20:30:43.348046  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.350639  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351004  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.351026  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351247  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.351478  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351642  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351803  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.351950  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:43.352122  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:43.352133  585113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:30:43.457130  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430643.415370749
	
	I1205 20:30:43.457164  585113 fix.go:216] guest clock: 1733430643.415370749
	I1205 20:30:43.457176  585113 fix.go:229] Guest: 2024-12-05 20:30:43.415370749 +0000 UTC Remote: 2024-12-05 20:30:43.34802793 +0000 UTC m=+292.733798952 (delta=67.342819ms)
	I1205 20:30:43.457209  585113 fix.go:200] guest clock delta is within tolerance: 67.342819ms
	I1205 20:30:43.457217  585113 start.go:83] releasing machines lock for "embed-certs-789000", held for 19.895543311s
	I1205 20:30:43.457251  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.457563  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:43.460628  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461002  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.461042  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461175  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461758  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461937  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.462067  585113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:30:43.462120  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.462147  585113 ssh_runner.go:195] Run: cat /version.json
	I1205 20:30:43.462169  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.464859  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465147  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465237  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465264  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465472  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465497  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465589  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465711  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465768  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.465863  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465907  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.466006  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.466129  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.568909  585113 ssh_runner.go:195] Run: systemctl --version
	I1205 20:30:43.575175  585113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:30:43.725214  585113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:30:43.732226  585113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:30:43.732369  585113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:30:43.750186  585113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:30:43.750223  585113 start.go:495] detecting cgroup driver to use...
	I1205 20:30:43.750296  585113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:30:43.767876  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:30:43.783386  585113 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:30:43.783465  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:30:43.799917  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:30:43.815607  585113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:30:43.935150  585113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:30:44.094292  585113 docker.go:233] disabling docker service ...
	I1205 20:30:44.094378  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:30:44.111307  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:30:44.127528  585113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:30:44.284496  585113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:30:44.422961  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:30:44.439104  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:30:44.461721  585113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:30:44.461787  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.476398  585113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:30:44.476463  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.489821  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.502250  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.514245  585113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:30:44.528227  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.540205  585113 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.559447  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.571434  585113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:30:44.583635  585113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:30:44.583717  585113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:30:44.600954  585113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:30:44.613381  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:44.733592  585113 ssh_runner.go:195] Run: sudo systemctl restart crio
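(The `sed` invocations above edit `/etc/crio/crio.conf.d/02-crio.conf` in place: they pin the pause image, switch the cgroup manager to cgroupfs, set `conmon_cgroup` to "pod", and open low ports via `default_sysctls`. Assuming they all applied cleanly, the resulting drop-in would look roughly like the fragment below; the section layout and heredoc form are assumptions of this sketch, and the real file on the VM may carry additional keys:)

# Illustrative reconstruction of the drop-in produced by the sed edits above;
# shown as a heredoc for convenience, not as the command minikube actually ran.
sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo systemctl daemon-reload
sudo systemctl restart crio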
	I1205 20:30:44.843948  585113 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:30:44.844036  585113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:30:44.849215  585113 start.go:563] Will wait 60s for crictl version
	I1205 20:30:44.849275  585113 ssh_runner.go:195] Run: which crictl
	I1205 20:30:44.853481  585113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:30:44.900488  585113 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:30:44.900583  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.944771  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.977119  585113 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:30:44.978527  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:44.981609  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982001  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:44.982037  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982240  585113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:30:44.986979  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:30:45.001779  585113 kubeadm.go:883] updating cluster {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:30:45.001935  585113 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:30:45.002021  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:45.041827  585113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:30:45.041918  585113 ssh_runner.go:195] Run: which lz4
	I1205 20:30:45.046336  585113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:30:45.050804  585113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:30:45.050852  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:30:43.482307  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .Start
	I1205 20:30:43.482501  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring networks are active...
	I1205 20:30:43.483222  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network default is active
	I1205 20:30:43.483574  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network mk-old-k8s-version-386085 is active
	I1205 20:30:43.484156  585602 main.go:141] libmachine: (old-k8s-version-386085) Getting domain xml...
	I1205 20:30:43.485045  585602 main.go:141] libmachine: (old-k8s-version-386085) Creating domain...
	I1205 20:30:44.770817  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting to get IP...
	I1205 20:30:44.772079  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:44.772538  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:44.772599  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:44.772517  586577 retry.go:31] will retry after 247.056435ms: waiting for machine to come up
	I1205 20:30:45.021096  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.021642  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.021560  586577 retry.go:31] will retry after 241.543543ms: waiting for machine to come up
	I1205 20:30:45.265136  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.265654  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.265683  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.265596  586577 retry.go:31] will retry after 324.624293ms: waiting for machine to come up
	I1205 20:30:45.592067  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.592603  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.592636  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.592558  586577 retry.go:31] will retry after 408.275958ms: waiting for machine to come up
	I1205 20:30:46.002321  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.002872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.002904  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.002808  586577 retry.go:31] will retry after 693.356488ms: waiting for machine to come up
	I1205 20:30:46.697505  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.697874  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.697900  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.697846  586577 retry.go:31] will retry after 906.807324ms: waiting for machine to come up
	I1205 20:30:46.612504  585113 crio.go:462] duration metric: took 1.56620974s to copy over tarball
	I1205 20:30:46.612585  585113 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:30:48.868826  585113 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256202653s)
	I1205 20:30:48.868863  585113 crio.go:469] duration metric: took 2.256329112s to extract the tarball
	I1205 20:30:48.868873  585113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:30:48.906872  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:48.955442  585113 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:30:48.955468  585113 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:30:48.955477  585113 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.31.2 crio true true} ...
	I1205 20:30:48.955603  585113 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-789000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:30:48.955668  585113 ssh_runner.go:195] Run: crio config
	I1205 20:30:49.007389  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:49.007419  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:49.007433  585113 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:30:49.007473  585113 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-789000 NodeName:embed-certs-789000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:30:49.007656  585113 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-789000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:30:49.007734  585113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:30:49.021862  585113 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:30:49.021949  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:30:49.032937  585113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1205 20:30:49.053311  585113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:30:49.073636  585113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1205 20:30:49.094437  585113 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I1205 20:30:49.098470  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:30:49.112013  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:49.246312  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:30:49.264250  585113 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000 for IP: 192.168.39.200
	I1205 20:30:49.264301  585113 certs.go:194] generating shared ca certs ...
	I1205 20:30:49.264329  585113 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:30:49.264565  585113 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:30:49.264627  585113 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:30:49.264641  585113 certs.go:256] generating profile certs ...
	I1205 20:30:49.264775  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/client.key
	I1205 20:30:49.264854  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key.5c723d79
	I1205 20:30:49.264894  585113 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key
	I1205 20:30:49.265026  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:30:49.265094  585113 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:30:49.265109  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:30:49.265144  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:30:49.265179  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:30:49.265215  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:30:49.265258  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:49.266137  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:30:49.297886  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:30:49.339461  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:30:49.385855  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:30:49.427676  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 20:30:49.466359  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:30:49.492535  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:30:49.518311  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:30:49.543545  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:30:49.567956  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:30:49.592361  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:30:49.616245  585113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:30:49.633947  585113 ssh_runner.go:195] Run: openssl version
	I1205 20:30:49.640353  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:30:49.652467  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657353  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657440  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.664045  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:30:49.679941  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:30:49.695153  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700397  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700458  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.706786  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:30:49.718994  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:30:49.731470  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736654  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736725  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.743034  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:30:49.755334  585113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:30:49.760378  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:30:49.766942  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:30:49.773911  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:30:49.780556  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:30:49.787004  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:30:49.793473  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:30:49.800009  585113 kubeadm.go:392] StartCluster: {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:30:49.800118  585113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:30:49.800163  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.844520  585113 cri.go:89] found id: ""
	I1205 20:30:49.844620  585113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:30:49.857604  585113 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:30:49.857640  585113 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:30:49.857702  585113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:30:49.870235  585113 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:30:49.871318  585113 kubeconfig.go:125] found "embed-certs-789000" server: "https://192.168.39.200:8443"
	I1205 20:30:49.873416  585113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:30:49.884281  585113 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
	I1205 20:30:49.884331  585113 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:30:49.884348  585113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:30:49.884410  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.930238  585113 cri.go:89] found id: ""
	I1205 20:30:49.930351  585113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:30:49.947762  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:30:49.957878  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:30:49.957902  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:30:49.957960  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:30:49.967261  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:30:49.967342  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:30:49.977868  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:30:49.987715  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:30:49.987777  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:30:49.998157  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.008224  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:30:50.008334  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.018748  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:30:50.028204  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:30:50.028287  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:30:50.038459  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:30:50.049458  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:50.175199  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:47.606601  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:47.607065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:47.607098  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:47.607001  586577 retry.go:31] will retry after 1.007867893s: waiting for machine to come up
	I1205 20:30:48.617140  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:48.617641  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:48.617674  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:48.617608  586577 retry.go:31] will retry after 1.15317606s: waiting for machine to come up
	I1205 20:30:49.773126  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:49.773670  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:49.773699  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:49.773620  586577 retry.go:31] will retry after 1.342422822s: waiting for machine to come up
	I1205 20:30:51.117592  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:51.118034  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:51.118065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:51.117973  586577 retry.go:31] will retry after 1.575794078s: waiting for machine to come up
	I1205 20:30:51.203131  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.027881984s)
	I1205 20:30:51.203193  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.415679  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.500984  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.598883  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:30:51.598986  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.099206  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.599755  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.619189  585113 api_server.go:72] duration metric: took 1.020303049s to wait for apiserver process to appear ...
	I1205 20:30:52.619236  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:30:52.619268  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:52.619903  585113 api_server.go:269] stopped: https://192.168.39.200:8443/healthz: Get "https://192.168.39.200:8443/healthz": dial tcp 192.168.39.200:8443: connect: connection refused
	I1205 20:30:53.119501  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.342363  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:30:55.342398  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:30:55.342418  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.471683  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.471729  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:55.619946  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.634855  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.634906  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.119928  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.128358  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:56.128396  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.620047  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.625869  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:30:56.633658  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:30:56.633698  585113 api_server.go:131] duration metric: took 4.014451973s to wait for apiserver health ...
	I1205 20:30:56.633712  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:56.633721  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:56.635658  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:30:52.695389  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:52.695838  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:52.695868  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:52.695784  586577 retry.go:31] will retry after 2.377931285s: waiting for machine to come up
	I1205 20:30:55.076859  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:55.077428  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:55.077469  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:55.077377  586577 retry.go:31] will retry after 2.586837249s: waiting for machine to come up
	I1205 20:30:56.637276  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:30:56.649131  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:30:56.670981  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:30:56.682424  585113 system_pods.go:59] 8 kube-system pods found
	I1205 20:30:56.682497  585113 system_pods.go:61] "coredns-7c65d6cfc9-hrrjc" [43d8b550-f29d-4a84-a2fc-b456abc486c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:30:56.682508  585113 system_pods.go:61] "etcd-embed-certs-789000" [99f232e4-1bc8-4f98-8bcf-8aa61d66158b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:30:56.682519  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [d1d11749-0ddc-4172-aaa9-bca00c64c912] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:30:56.682528  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [b291c993-cd10-4d0f-8c3e-a6db726cf83a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:30:56.682536  585113 system_pods.go:61] "kube-proxy-h79dj" [80abe907-24e7-4001-90a6-f4d10fd9fc6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:30:56.682544  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [490d7afa-24fd-43c8-8088-539bb7e1eb9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:30:56.682556  585113 system_pods.go:61] "metrics-server-6867b74b74-tlsjl" [cd1d73a4-27d1-4e68-b7d8-6da497fc4e53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:30:56.682570  585113 system_pods.go:61] "storage-provisioner" [3246e383-4f15-4222-a50c-c5b243fda12a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:30:56.682579  585113 system_pods.go:74] duration metric: took 11.566899ms to wait for pod list to return data ...
	I1205 20:30:56.682598  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:30:56.687073  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:30:56.687172  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:30:56.687222  585113 node_conditions.go:105] duration metric: took 4.613225ms to run NodePressure ...
	I1205 20:30:56.687273  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:56.981686  585113 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985944  585113 kubeadm.go:739] kubelet initialised
	I1205 20:30:56.985968  585113 kubeadm.go:740] duration metric: took 4.256434ms waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985976  585113 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:30:56.991854  585113 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:30:58.997499  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
	I1205 20:30:57.667200  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:57.667644  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:57.667681  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:57.667592  586577 retry.go:31] will retry after 2.856276116s: waiting for machine to come up
	I1205 20:31:00.525334  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:00.525796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:31:00.525830  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:31:00.525740  586577 retry.go:31] will retry after 5.119761936s: waiting for machine to come up
	I1205 20:31:00.999102  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:01.500344  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:01.500371  585113 pod_ready.go:82] duration metric: took 4.508490852s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:01.500382  585113 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:03.506621  585113 pod_ready.go:103] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:05.007677  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:05.007703  585113 pod_ready.go:82] duration metric: took 3.507315826s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:05.007713  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:05.646790  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647230  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has current primary IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647264  585602 main.go:141] libmachine: (old-k8s-version-386085) Found IP for machine: 192.168.72.144
	I1205 20:31:05.647278  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserving static IP address...
	I1205 20:31:05.647796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.647834  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | skip adding static IP to network mk-old-k8s-version-386085 - found existing host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"}
	I1205 20:31:05.647856  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserved static IP address: 192.168.72.144
	I1205 20:31:05.647872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Getting to WaitForSSH function...
	I1205 20:31:05.647889  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting for SSH to be available...
	I1205 20:31:05.650296  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650610  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.650643  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH client type: external
	I1205 20:31:05.650779  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa (-rw-------)
	I1205 20:31:05.650816  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:05.650837  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | About to run SSH command:
	I1205 20:31:05.650851  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | exit 0
	I1205 20:31:05.776876  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:05.777311  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:31:05.777948  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:05.780609  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781053  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.781091  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781319  585602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json ...
	I1205 20:31:05.781585  585602 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:05.781607  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:05.781942  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.784729  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785155  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.785191  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785326  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.785491  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785659  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785886  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.786078  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.786309  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.786323  585602 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:05.893034  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:05.893079  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893388  585602 buildroot.go:166] provisioning hostname "old-k8s-version-386085"
	I1205 20:31:05.893426  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893623  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.896484  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.896883  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.896910  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.897031  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.897252  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897441  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897615  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.897796  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.897965  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.897977  585602 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386085 && echo "old-k8s-version-386085" | sudo tee /etc/hostname
	I1205 20:31:06.017910  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386085
	
	I1205 20:31:06.017939  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.020956  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021298  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.021332  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021494  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021863  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021995  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.022137  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.022325  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.022342  585602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386085' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386085/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386085' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:06.138200  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:06.138234  585602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:06.138261  585602 buildroot.go:174] setting up certificates
	I1205 20:31:06.138274  585602 provision.go:84] configureAuth start
	I1205 20:31:06.138287  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:06.138588  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.141488  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.141909  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.141965  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.142096  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.144144  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144720  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.144742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144951  585602 provision.go:143] copyHostCerts
	I1205 20:31:06.145020  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:06.145031  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:06.145085  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:06.145206  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:06.145219  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:06.145248  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:06.145335  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:06.145346  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:06.145376  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:06.145452  585602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386085 san=[127.0.0.1 192.168.72.144 localhost minikube old-k8s-version-386085]
	I1205 20:31:06.276466  585602 provision.go:177] copyRemoteCerts
	I1205 20:31:06.276530  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:06.276559  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.279218  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279550  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.279578  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279766  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.279990  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.280152  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.280317  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.362479  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:06.387631  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:31:06.413110  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:06.437931  585602 provision.go:87] duration metric: took 299.641033ms to configureAuth
	I1205 20:31:06.437962  585602 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:06.438176  585602 config.go:182] Loaded profile config "old-k8s-version-386085": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:31:06.438272  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.441059  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441413  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.441444  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441655  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.441846  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.441992  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.442174  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.442379  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.442552  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.442568  585602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:06.655666  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:06.655699  585602 machine.go:96] duration metric: took 874.099032ms to provisionDockerMachine
	I1205 20:31:06.655713  585602 start.go:293] postStartSetup for "old-k8s-version-386085" (driver="kvm2")
	I1205 20:31:06.655723  585602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:06.655752  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.656082  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:06.656115  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.658835  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659178  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.659229  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659378  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.659636  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.659808  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.659971  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.744484  585602 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:06.749025  585602 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:06.749060  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:06.749134  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:06.749273  585602 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:06.749411  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:06.760720  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:06.785449  585602 start.go:296] duration metric: took 129.720092ms for postStartSetup
	I1205 20:31:06.785500  585602 fix.go:56] duration metric: took 23.328073686s for fixHost
	I1205 20:31:06.785526  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.788417  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.788797  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.788828  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.789049  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.789296  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789483  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789688  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.789870  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.790046  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.790065  585602 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:06.897781  585929 start.go:364] duration metric: took 3m3.751494327s to acquireMachinesLock for "default-k8s-diff-port-942599"
	I1205 20:31:06.897847  585929 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:06.897858  585929 fix.go:54] fixHost starting: 
	I1205 20:31:06.898355  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:06.898419  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:06.916556  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I1205 20:31:06.917111  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:06.917648  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:31:06.917674  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:06.918014  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:06.918256  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:06.918402  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:31:06.920077  585929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-942599: state=Stopped err=<nil>
	I1205 20:31:06.920105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	W1205 20:31:06.920257  585929 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:06.922145  585929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-942599" ...
	I1205 20:31:06.923548  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Start
	I1205 20:31:06.923770  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring networks are active...
	I1205 20:31:06.924750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network default is active
	I1205 20:31:06.925240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network mk-default-k8s-diff-port-942599 is active
	I1205 20:31:06.925721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Getting domain xml...
	I1205 20:31:06.926719  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Creating domain...
	I1205 20:31:06.897579  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430666.872047181
	
	I1205 20:31:06.897606  585602 fix.go:216] guest clock: 1733430666.872047181
	I1205 20:31:06.897615  585602 fix.go:229] Guest: 2024-12-05 20:31:06.872047181 +0000 UTC Remote: 2024-12-05 20:31:06.785506394 +0000 UTC m=+234.970971247 (delta=86.540787ms)
	I1205 20:31:06.897679  585602 fix.go:200] guest clock delta is within tolerance: 86.540787ms
	I1205 20:31:06.897691  585602 start.go:83] releasing machines lock for "old-k8s-version-386085", held for 23.440303187s
	I1205 20:31:06.897727  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.898085  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.901127  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901530  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.901567  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901719  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902413  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902626  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902776  585602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:06.902827  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.902878  585602 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:06.902903  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.905664  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.905912  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906050  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906086  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906256  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906341  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906367  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906411  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906517  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906684  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906837  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906849  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.907112  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.986078  585602 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:07.009500  585602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:07.159146  585602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:07.166263  585602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:07.166358  585602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:07.186021  585602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:07.186063  585602 start.go:495] detecting cgroup driver to use...
	I1205 20:31:07.186140  585602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:07.205074  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:07.221207  585602 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:07.221268  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:07.236669  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:07.252848  585602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:07.369389  585602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:07.504993  585602 docker.go:233] disabling docker service ...
	I1205 20:31:07.505101  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:07.523294  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:07.538595  585602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:07.687830  585602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:07.816176  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:07.833624  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:07.853409  585602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:31:07.853478  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.865346  585602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:07.865426  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.877962  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.889255  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.901632  585602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:07.916169  585602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:07.927092  585602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:07.927169  585602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:07.942288  585602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:31:07.953314  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:08.092156  585602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:08.205715  585602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:08.205799  585602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:08.214280  585602 start.go:563] Will wait 60s for crictl version
	I1205 20:31:08.214351  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:08.220837  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:08.265983  585602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:08.266065  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.295839  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.327805  585602 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 20:31:07.014634  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:08.018024  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.018062  585113 pod_ready.go:82] duration metric: took 3.010340127s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.018080  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024700  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.024731  585113 pod_ready.go:82] duration metric: took 6.639434ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024744  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030379  585113 pod_ready.go:93] pod "kube-proxy-h79dj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.030399  585113 pod_ready.go:82] duration metric: took 5.648086ms for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030408  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036191  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.036211  585113 pod_ready.go:82] duration metric: took 5.797344ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036223  585113 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:10.051737  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:08.329278  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:08.332352  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332700  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:08.332747  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332930  585602 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:08.337611  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:08.350860  585602 kubeadm.go:883] updating cluster {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:08.351016  585602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:31:08.351090  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:08.403640  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:08.403716  585602 ssh_runner.go:195] Run: which lz4
	I1205 20:31:08.408211  585602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:08.413136  585602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:08.413168  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 20:31:10.209351  585602 crio.go:462] duration metric: took 1.801169802s to copy over tarball
	I1205 20:31:10.209438  585602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:31:08.255781  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting to get IP...
	I1205 20:31:08.256721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257262  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.257164  586715 retry.go:31] will retry after 301.077952ms: waiting for machine to come up
	I1205 20:31:08.559682  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560187  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560216  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.560130  586715 retry.go:31] will retry after 364.457823ms: waiting for machine to come up
	I1205 20:31:08.926774  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927371  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927401  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.927274  586715 retry.go:31] will retry after 461.958198ms: waiting for machine to come up
	I1205 20:31:09.390861  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391502  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.391432  586715 retry.go:31] will retry after 587.049038ms: waiting for machine to come up
	I1205 20:31:09.980451  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.980999  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.981026  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.980932  586715 retry.go:31] will retry after 499.551949ms: waiting for machine to come up
	I1205 20:31:10.482653  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483188  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483219  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:10.483135  586715 retry.go:31] will retry after 749.476034ms: waiting for machine to come up
	I1205 20:31:11.233788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234286  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234315  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:11.234227  586715 retry.go:31] will retry after 768.81557ms: waiting for machine to come up
	I1205 20:31:12.004904  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005427  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005460  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:12.005382  586715 retry.go:31] will retry after 1.360132177s: waiting for machine to come up
	I1205 20:31:12.549406  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:15.043540  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:13.303553  585602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094044744s)
	I1205 20:31:13.303598  585602 crio.go:469] duration metric: took 3.094215888s to extract the tarball
	I1205 20:31:13.303610  585602 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:13.350989  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:13.388660  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:13.388702  585602 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:13.388814  585602 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.388822  585602 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.388832  585602 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.388853  585602 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.388881  585602 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.388904  585602 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:31:13.388823  585602 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.388859  585602 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390414  585602 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390941  585602 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.391016  585602 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.390927  585602 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.391373  585602 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.391378  585602 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.565006  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.577450  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 20:31:13.584653  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.597086  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.619848  585602 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 20:31:13.619899  585602 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.619955  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.623277  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.628407  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.697151  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.703111  585602 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 20:31:13.703167  585602 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 20:31:13.703219  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736004  585602 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 20:31:13.736059  585602 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.736058  585602 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 20:31:13.736078  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.736094  585602 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.736104  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736135  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736187  585602 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 20:31:13.736207  585602 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.736235  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.783651  585602 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 20:31:13.783706  585602 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.783758  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.787597  585602 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 20:31:13.787649  585602 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.787656  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.787692  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.828445  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.828491  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.828544  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.828573  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.828616  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.828635  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.890937  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.992600  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.992661  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.992725  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.992780  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.095364  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:14.095462  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:14.163224  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 20:31:14.163320  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:14.163339  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:14.163420  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:14.163510  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.243805  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 20:31:14.243860  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 20:31:14.243881  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 20:31:14.287718  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 20:31:14.290994  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 20:31:14.291049  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 20:31:14.579648  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:14.728232  585602 cache_images.go:92] duration metric: took 1.339506459s to LoadCachedImages
	W1205 20:31:14.728389  585602 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1205 20:31:14.728417  585602 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.20.0 crio true true} ...
	I1205 20:31:14.728570  585602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386085 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:31:14.728672  585602 ssh_runner.go:195] Run: crio config
	I1205 20:31:14.778932  585602 cni.go:84] Creating CNI manager for ""
	I1205 20:31:14.778957  585602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:14.778967  585602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:14.778987  585602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386085 NodeName:old-k8s-version-386085 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:31:14.779131  585602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386085"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:14.779196  585602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 20:31:14.792400  585602 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:14.792494  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:14.802873  585602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:31:14.821562  585602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:14.839442  585602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
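The 2123-byte kubeadm.yaml.new copied above is the multi-document configuration dumped earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, separated by "---"). As a minimal illustrative sketch only, not minikube's own code, and assuming gopkg.in/yaml.v3 is available, such a file can be split back into its documents and each document's apiVersion/kind inspected like this:

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	// docHeader captures only the fields every kubeadm-style document carries.
	type docHeader struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var h docHeader
			if err := dec.Decode(&h); err == io.EOF {
				break // no more "---"-separated documents
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s %s\n", h.APIVersion, h.Kind)
		}
	}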
	I1205 20:31:14.861314  585602 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:14.865457  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
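The bash one-liner above makes the control-plane.minikube.internal entry idempotent: it filters out any existing line for that host, appends the current IP, and copies the result back over /etc/hosts. A rough Go equivalent of the same pattern, illustrative only, with the hostname and IP taken from the log and the temp-file step simplified away:

	package main

	import (
		"log"
		"os"
		"strings"
	)

	// ensureHostEntry drops any line ending in "\t<host>" and appends "<ip>\t<host>",
	// mirroring the grep -v / echo / cp pipeline in the log above.
	func ensureHostEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostEntry("/etc/hosts", "192.168.72.144", "control-plane.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}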
	I1205 20:31:14.878278  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:15.002193  585602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:15.030699  585602 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085 for IP: 192.168.72.144
	I1205 20:31:15.030734  585602 certs.go:194] generating shared ca certs ...
	I1205 20:31:15.030758  585602 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.030975  585602 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:15.031027  585602 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:15.031048  585602 certs.go:256] generating profile certs ...
	I1205 20:31:15.031206  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key
	I1205 20:31:15.031276  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18
	I1205 20:31:15.031324  585602 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key
	I1205 20:31:15.031489  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:15.031535  585602 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:15.031550  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:15.031581  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:15.031612  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:15.031644  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:15.031698  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:15.032410  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:15.063090  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:15.094212  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:15.124685  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:15.159953  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 20:31:15.204250  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:31:15.237483  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:15.276431  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:15.303774  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:15.328872  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:15.353852  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:15.380916  585602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:15.401082  585602 ssh_runner.go:195] Run: openssl version
	I1205 20:31:15.407442  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:15.420377  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425721  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425800  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.432475  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:15.446140  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:15.459709  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465165  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465241  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.471609  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:15.484139  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:15.496636  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501575  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501634  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.507814  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
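The openssl x509 -hash -noout calls and the ln -fs links above follow the OpenSSL convention that a CA certificate is looked up in /etc/ssl/certs by the hash of its subject name, with a .0 suffix for the first certificate carrying that hash (b5213941.0 for minikubeCA.pem in this run). A hedged sketch of the same step in Go, shelling out to openssl as the log already does; the paths are illustrative:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
	// the same layout the ln -fs commands in the log produce.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // -f behaviour: replace an existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}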
	I1205 20:31:15.521234  585602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:15.526452  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:15.532999  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:15.540680  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:15.547455  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:15.553996  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:15.560574  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
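Each -checkend 86400 invocation above asks OpenSSL whether the certificate will still be valid 86400 seconds (one day) from now. The same check can be done without shelling out; this is an illustrative snippet, not minikube's implementation:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// matching what `openssl x509 -checkend` tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}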
	I1205 20:31:15.568489  585602 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:15.568602  585602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:15.568682  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.610693  585602 cri.go:89] found id: ""
	I1205 20:31:15.610808  585602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:15.622685  585602 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:15.622709  585602 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:15.622764  585602 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:15.633754  585602 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:15.634922  585602 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386085" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:31:15.635682  585602 kubeconfig.go:62] /home/jenkins/minikube-integration/20052-530897/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386085" cluster setting kubeconfig missing "old-k8s-version-386085" context setting]
	I1205 20:31:15.636878  585602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.719767  585602 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:15.731576  585602 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I1205 20:31:15.731622  585602 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:15.731639  585602 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:15.731705  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.777769  585602 cri.go:89] found id: ""
	I1205 20:31:15.777875  585602 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:15.797121  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:15.807961  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:15.807991  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:15.808042  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:31:15.818177  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:15.818270  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:15.829092  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:31:15.839471  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:15.839564  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:15.850035  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.859907  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:15.859984  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.870882  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:31:15.881475  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:15.881549  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:31:15.892078  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:15.904312  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.042308  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.787487  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:13.367666  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368154  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368185  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:13.368096  586715 retry.go:31] will retry after 1.319101375s: waiting for machine to come up
	I1205 20:31:14.689562  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690039  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:14.689996  586715 retry.go:31] will retry after 2.267379471s: waiting for machine to come up
	I1205 20:31:16.959412  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959882  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959915  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:16.959804  586715 retry.go:31] will retry after 2.871837018s: waiting for machine to come up
	I1205 20:31:17.044878  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:19.543265  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:17.036864  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.128855  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.219276  585602 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:17.219380  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:17.720206  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.219623  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.719555  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.219776  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.719967  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.219686  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.719806  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.219875  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.719915  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.834750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835299  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835326  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:19.835203  586715 retry.go:31] will retry after 2.740879193s: waiting for machine to come up
	I1205 20:31:22.577264  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577746  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577775  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:22.577709  586715 retry.go:31] will retry after 3.807887487s: waiting for machine to come up
	I1205 20:31:22.043635  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:24.543255  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:22.219930  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:22.719848  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.719903  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.220505  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.719726  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.220161  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.720115  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.220399  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.719567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
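The block of repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above is a poll loop: roughly every 500 ms the apiserver process is looked up again until it appears or a deadline passes. A simplified local sketch of that kind of wait loop, with sudo omitted and a 30-second timeout assumed rather than taken from the source:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until the pattern matches a running process
	// or the deadline passes, mirroring the apiserver wait loop in the log.
	func waitForProcess(pattern string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 when at least one process matches the full command line (-f),
			// exactly (-x), picking the newest (-n).
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("no process matching %q after %s", pattern, timeout)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 30*time.Second); err != nil {
			log.Fatal(err)
		}
		fmt.Println("kube-apiserver is up")
	}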
	I1205 20:31:27.669618  585025 start.go:364] duration metric: took 59.106849765s to acquireMachinesLock for "no-preload-816185"
	I1205 20:31:27.669680  585025 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:27.669689  585025 fix.go:54] fixHost starting: 
	I1205 20:31:27.670111  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:27.670153  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:27.689600  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1205 20:31:27.690043  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:27.690508  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:31:27.690530  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:27.690931  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:27.691146  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:27.691279  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:31:27.692881  585025 fix.go:112] recreateIfNeeded on no-preload-816185: state=Stopped err=<nil>
	I1205 20:31:27.692905  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	W1205 20:31:27.693059  585025 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:27.694833  585025 out.go:177] * Restarting existing kvm2 VM for "no-preload-816185" ...
	I1205 20:31:26.389296  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389828  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Found IP for machine: 192.168.50.96
	I1205 20:31:26.389866  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has current primary IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserving static IP address...
	I1205 20:31:26.390321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserved static IP address: 192.168.50.96
	I1205 20:31:26.390354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for SSH to be available...
	I1205 20:31:26.390380  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.390404  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | skip adding static IP to network mk-default-k8s-diff-port-942599 - found existing host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"}
	I1205 20:31:26.390420  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Getting to WaitForSSH function...
	I1205 20:31:26.392509  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392875  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.392912  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH client type: external
	I1205 20:31:26.392988  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa (-rw-------)
	I1205 20:31:26.393057  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:26.393086  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | About to run SSH command:
	I1205 20:31:26.393105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | exit 0
	I1205 20:31:26.520867  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:26.521212  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetConfigRaw
	I1205 20:31:26.521857  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.524512  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.524853  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.524883  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.525141  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:31:26.525404  585929 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:26.525425  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:26.525639  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.527806  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.528121  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528257  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.528474  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528635  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528771  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.528902  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.529132  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.529147  585929 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:26.645385  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:26.645429  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645719  585929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-942599"
	I1205 20:31:26.645751  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645962  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.648906  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649316  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.649346  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649473  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.649686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649880  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649998  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.650161  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.650338  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.650354  585929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-942599 && echo "default-k8s-diff-port-942599" | sudo tee /etc/hostname
	I1205 20:31:26.780217  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942599
	
	I1205 20:31:26.780253  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.783240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783628  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.783660  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783804  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.783997  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784162  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.784530  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.784747  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.784766  585929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-942599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-942599/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-942599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:26.909975  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:26.910006  585929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:26.910087  585929 buildroot.go:174] setting up certificates
	I1205 20:31:26.910101  585929 provision.go:84] configureAuth start
	I1205 20:31:26.910114  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.910440  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.913667  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.914094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.917031  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917430  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.917462  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917608  585929 provision.go:143] copyHostCerts
	I1205 20:31:26.917681  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:26.917706  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:26.917772  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:26.917889  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:26.917900  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:26.917935  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:26.918013  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:26.918023  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:26.918065  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:26.918163  585929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-942599 san=[127.0.0.1 192.168.50.96 default-k8s-diff-port-942599 localhost minikube]
	I1205 20:31:27.003691  585929 provision.go:177] copyRemoteCerts
	I1205 20:31:27.003783  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:27.003821  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.006311  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006632  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.006665  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006820  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.007011  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.007153  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.007274  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.094973  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:27.121684  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 20:31:27.146420  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:27.171049  585929 provision.go:87] duration metric: took 260.930345ms to configureAuth
	I1205 20:31:27.171083  585929 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:27.171268  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:27.171385  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.174287  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174677  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.174717  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174946  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.175168  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175338  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.175703  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.175927  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.175959  585929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:27.416697  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:27.416724  585929 machine.go:96] duration metric: took 891.305367ms to provisionDockerMachine
	I1205 20:31:27.416737  585929 start.go:293] postStartSetup for "default-k8s-diff-port-942599" (driver="kvm2")
	I1205 20:31:27.416748  585929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:27.416786  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.417143  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:27.417183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.419694  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420041  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.420072  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420259  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.420488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.420681  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.420813  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.507592  585929 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:27.512178  585929 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:27.512209  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:27.512297  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:27.512416  585929 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:27.512544  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:27.522860  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:27.550167  585929 start.go:296] duration metric: took 133.414654ms for postStartSetup
	I1205 20:31:27.550211  585929 fix.go:56] duration metric: took 20.652352836s for fixHost
	I1205 20:31:27.550240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.553056  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.553490  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553631  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.553822  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554007  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.554372  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.554584  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.554603  585929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:27.669428  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430687.619179277
	
	I1205 20:31:27.669455  585929 fix.go:216] guest clock: 1733430687.619179277
	I1205 20:31:27.669467  585929 fix.go:229] Guest: 2024-12-05 20:31:27.619179277 +0000 UTC Remote: 2024-12-05 20:31:27.550217419 +0000 UTC m=+204.551998169 (delta=68.961858ms)
	I1205 20:31:27.669506  585929 fix.go:200] guest clock delta is within tolerance: 68.961858ms
	I1205 20:31:27.669514  585929 start.go:83] releasing machines lock for "default-k8s-diff-port-942599", held for 20.771694403s
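The guest clock check above runs `date +%s.%N` over SSH, parses the result, and compares it with the host clock; here the delta of about 69 ms is within tolerance, so no adjustment is needed. An illustrative Go version of that comparison, parsing the same seconds.nanoseconds format; the 2-second tolerance is an assumption, not minikube's actual setting:

	package main

	import (
		"fmt"
		"log"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch turns "1733430687.619179277" (the output of `date +%s.%N`,
	// which always pads the fraction to nine digits) into a time.Time.
	func parseEpoch(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseEpoch("1733430687.619179277") // value taken from the log above
		if err != nil {
			log.Fatal(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		tolerance := 2 * time.Second // assumed tolerance for illustration
		fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}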
	I1205 20:31:27.669559  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.669877  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:27.672547  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.672978  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.673009  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.673224  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673992  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.674125  585929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:27.674176  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.674201  585929 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:27.674231  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.677006  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677388  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677418  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677437  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677565  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.677745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.677919  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.677925  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677948  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.678115  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.678107  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.678258  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.678382  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.678527  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.790786  585929 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:27.797092  585929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:27.946053  585929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:27.953979  585929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:27.954073  585929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:27.975059  585929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
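[Annotation, not part of the captured log] The two lines above disable any bridge/podman CNI configs by renaming them with a ".mk_disabled" suffix. A minimal Go sketch of the same idea follows; the directory and suffix come from the log, everything else (error handling, match rules) is an assumption, not minikube's actual code.

    // disable_cni.go: rename bridge/podman CNI configs so the runtime ignores them.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue // already disabled or not a file
    		}
    		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
    			continue // only bridge/podman configs are disabled
    		}
    		src := filepath.Join(dir, name)
    		if err := os.Rename(src, src+".mk_disabled"); err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("disabled %s\n", src)
    	}
    }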
	I1205 20:31:27.975090  585929 start.go:495] detecting cgroup driver to use...
	I1205 20:31:27.975160  585929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:27.991738  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:28.006412  585929 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:28.006529  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:28.021329  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:28.037390  585929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:28.155470  585929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:28.326332  585929 docker.go:233] disabling docker service ...
	I1205 20:31:28.326415  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:28.343299  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:28.358147  585929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:28.493547  585929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:28.631184  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:28.647267  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:28.670176  585929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:28.670269  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.686230  585929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:28.686312  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.702991  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.715390  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.731909  585929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:28.745042  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.757462  585929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.779049  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.790960  585929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:28.806652  585929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:28.806724  585929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:28.821835  585929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:31:28.832688  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:28.967877  585929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:29.084571  585929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:29.084666  585929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:29.089892  585929 start.go:563] Will wait 60s for crictl version
	I1205 20:31:29.089958  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:31:29.094021  585929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:29.132755  585929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:29.132843  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.161779  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.194415  585929 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:31:27.042893  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:29.545284  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:27.696342  585025 main.go:141] libmachine: (no-preload-816185) Calling .Start
	I1205 20:31:27.696546  585025 main.go:141] libmachine: (no-preload-816185) Ensuring networks are active...
	I1205 20:31:27.697272  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network default is active
	I1205 20:31:27.697720  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network mk-no-preload-816185 is active
	I1205 20:31:27.698153  585025 main.go:141] libmachine: (no-preload-816185) Getting domain xml...
	I1205 20:31:27.698993  585025 main.go:141] libmachine: (no-preload-816185) Creating domain...
	I1205 20:31:29.005551  585025 main.go:141] libmachine: (no-preload-816185) Waiting to get IP...
	I1205 20:31:29.006633  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.007124  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.007217  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.007100  586921 retry.go:31] will retry after 264.716976ms: waiting for machine to come up
	I1205 20:31:29.273821  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.274364  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.274393  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.274318  586921 retry.go:31] will retry after 307.156436ms: waiting for machine to come up
	I1205 20:31:29.582968  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.583583  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.583621  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.583531  586921 retry.go:31] will retry after 335.63624ms: waiting for machine to come up
	I1205 20:31:29.921262  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.921823  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.921855  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.921771  586921 retry.go:31] will retry after 577.408278ms: waiting for machine to come up
	I1205 20:31:30.500556  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:30.501058  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:30.501095  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:30.500999  586921 retry.go:31] will retry after 757.019094ms: waiting for machine to come up
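[Annotation, not part of the captured log] The "will retry after ..." lines above are a bounded retry loop with growing delays while the VM waits for a DHCP lease. A self-contained Go sketch of that pattern; lookupIP is a hypothetical stand-in for the real libvirt lease query, and the intervals are only roughly modeled on the log.

    // retry_ip.go: poll for a VM's IP with growing, jittered delays until a deadline.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func lookupIP() (string, error) {
    	// placeholder: would ask libvirt for the domain's current DHCP lease
    	return "", errors.New("no lease yet")
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			fmt.Println("got IP:", ip)
    			return
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2 // grow roughly like the intervals in the log
    	}
    	fmt.Println("timed out waiting for machine to come up")
    }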
	I1205 20:31:27.220124  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:27.719460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.719599  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.219672  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.720450  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.220436  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.719573  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.220357  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.720052  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.195845  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:29.198779  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199138  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:29.199171  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199365  585929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:29.204553  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:29.217722  585929 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:29.217873  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:29.217943  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:29.259006  585929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:29.259105  585929 ssh_runner.go:195] Run: which lz4
	I1205 20:31:29.264049  585929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:29.268978  585929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:29.269019  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:31:30.811247  585929 crio.go:462] duration metric: took 1.547244528s to copy over tarball
	I1205 20:31:30.811340  585929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:31:32.043543  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:34.044420  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:31.260083  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.260626  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.260658  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.260593  586921 retry.go:31] will retry after 593.111543ms: waiting for machine to come up
	I1205 20:31:31.854850  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.855286  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.855316  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.855224  586921 retry.go:31] will retry after 832.693762ms: waiting for machine to come up
	I1205 20:31:32.690035  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:32.690489  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:32.690515  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:32.690448  586921 retry.go:31] will retry after 1.128242733s: waiting for machine to come up
	I1205 20:31:33.820162  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:33.820798  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:33.820831  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:33.820732  586921 retry.go:31] will retry after 1.331730925s: waiting for machine to come up
	I1205 20:31:35.154230  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:35.154661  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:35.154690  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:35.154590  586921 retry.go:31] will retry after 2.19623815s: waiting for machine to come up
	I1205 20:31:32.220318  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:32.719780  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.220114  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.719554  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.720021  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.219461  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.720334  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.219480  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.720159  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.093756  585929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282380101s)
	I1205 20:31:33.093791  585929 crio.go:469] duration metric: took 2.282510298s to extract the tarball
	I1205 20:31:33.093802  585929 ssh_runner.go:146] rm: /preloaded.tar.lz4
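[Annotation, not part of the captured log] The preload sequence above is: confirm /preloaded.tar.lz4 arrived on the node, extract it into /var with lz4-compressed tar (preserving xattrs), then delete the tarball. A Go sketch that drives the same external tools; not minikube's code.

    // preload_extract.go: extract the preloaded image tarball into /var, then remove it.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	if _, err := os.Stat(tarball); err != nil {
    		log.Fatalf("preload tarball missing: %v", err)
    	}
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.Remove(tarball); err != nil {
    		log.Fatal(err)
    	}
    }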
	I1205 20:31:33.132232  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:33.188834  585929 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:31:33.188868  585929 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:31:33.188879  585929 kubeadm.go:934] updating node { 192.168.50.96 8444 v1.31.2 crio true true} ...
	I1205 20:31:33.189027  585929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-942599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:31:33.189114  585929 ssh_runner.go:195] Run: crio config
	I1205 20:31:33.235586  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:31:33.235611  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:33.235621  585929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:33.235644  585929 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-942599 NodeName:default-k8s-diff-port-942599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:31:33.235770  585929 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-942599"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.96"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:33.235835  585929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:31:33.246737  585929 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:33.246829  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:33.257763  585929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1205 20:31:33.276025  585929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:33.294008  585929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
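[Annotation, not part of the captured log] The kubeadm config dumped above is generated per profile by filling in the node name, advertise address and API port (8444 here). A minimal Go sketch of that kind of templating using text/template; the template fragment is trimmed for brevity and is not minikube's actual template.

    // kubeadm_tmpl.go: render a small kubeadm config fragment for a given node.
    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    const frag = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.IP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.Name}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(frag))
    	err := t.Execute(os.Stdout, struct {
    		Name, IP string
    		Port     int
    	}{Name: "default-k8s-diff-port-942599", IP: "192.168.50.96", Port: 8444})
    	if err != nil {
    		log.Fatal(err)
    	}
    }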
	I1205 20:31:33.311640  585929 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:33.315963  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:33.328834  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:33.439221  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:33.457075  585929 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599 for IP: 192.168.50.96
	I1205 20:31:33.457103  585929 certs.go:194] generating shared ca certs ...
	I1205 20:31:33.457131  585929 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:33.457337  585929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:33.457407  585929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:33.457420  585929 certs.go:256] generating profile certs ...
	I1205 20:31:33.457528  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.key
	I1205 20:31:33.457612  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key.d50b8fb2
	I1205 20:31:33.457668  585929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key
	I1205 20:31:33.457824  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:33.457870  585929 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:33.457885  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:33.457924  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:33.457959  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:33.457989  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:33.458044  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:33.459092  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:33.502129  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:33.533461  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:33.572210  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:33.597643  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 20:31:33.621382  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:31:33.648568  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:33.682320  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:33.707415  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:33.733418  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:33.760333  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:33.794070  585929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:33.813531  585929 ssh_runner.go:195] Run: openssl version
	I1205 20:31:33.820336  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:33.832321  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839066  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839135  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.845526  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:33.857376  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:33.868864  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873732  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873799  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.881275  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:33.893144  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:33.904679  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909686  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909760  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.915937  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:31:33.927401  585929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:33.932326  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:33.939165  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:33.945630  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:33.951867  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:33.957857  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:33.963994  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
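[Annotation, not part of the captured log] Each "openssl x509 ... -checkend 86400" call above verifies that a control-plane certificate will still be valid 24 hours from now. The same check in pure Go standard library (the path is one example from the log):

    // cert_expiry.go: fail if the certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid past 24h:", cert.NotAfter)
    }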
	I1205 20:31:33.969964  585929 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:33.970050  585929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:33.970103  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.016733  585929 cri.go:89] found id: ""
	I1205 20:31:34.016814  585929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:34.027459  585929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:34.027478  585929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:34.027523  585929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:34.037483  585929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:34.038588  585929 kubeconfig.go:125] found "default-k8s-diff-port-942599" server: "https://192.168.50.96:8444"
	I1205 20:31:34.041140  585929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:34.050903  585929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.96
	I1205 20:31:34.050938  585929 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:34.050956  585929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:34.051014  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.090840  585929 cri.go:89] found id: ""
	I1205 20:31:34.090932  585929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:34.107686  585929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:34.118277  585929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:34.118305  585929 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:34.118359  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 20:31:34.127654  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:34.127733  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:34.137295  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 20:31:34.147005  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:34.147076  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:34.158576  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.167933  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:34.168022  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.177897  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 20:31:34.187467  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:34.187539  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:31:34.197825  585929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:34.210775  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:34.337491  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.308389  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.549708  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.624390  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.706794  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:35.706912  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.207620  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.707990  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.727214  585929 api_server.go:72] duration metric: took 1.020418782s to wait for apiserver process to appear ...
	I1205 20:31:36.727257  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:31:36.727289  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:36.727908  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
	I1205 20:31:37.228102  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
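[Annotation, not part of the captured log] The healthz lines above are a polling loop: hit https://192.168.50.96:8444/healthz repeatedly, tolerating "connection refused" and timeouts, until the apiserver answers or an overall deadline passes. A self-contained Go sketch of that loop; TLS verification is skipped here only to keep the sketch standalone, whereas the real check would trust the cluster CA.

    // healthz.go: poll the apiserver /healthz endpoint until it returns 200 or we time out.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.50.96:8444/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver is healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // connection refused / timeout: try again
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }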
	I1205 20:31:36.544564  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:39.043806  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:37.352371  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:37.352911  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:37.352946  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:37.352862  586921 retry.go:31] will retry after 2.333670622s: waiting for machine to come up
	I1205 20:31:39.688034  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:39.688597  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:39.688630  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:39.688537  586921 retry.go:31] will retry after 2.476657304s: waiting for machine to come up
	I1205 20:31:37.219933  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:37.720360  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.219574  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.720034  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.219449  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.719752  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.219718  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.719771  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.219548  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.720381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.228416  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:42.228489  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:41.044569  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:43.542439  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:45.543063  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:42.168384  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:42.168759  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:42.168781  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:42.168719  586921 retry.go:31] will retry after 3.531210877s: waiting for machine to come up
	I1205 20:31:45.701387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701831  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has current primary IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701868  585025 main.go:141] libmachine: (no-preload-816185) Found IP for machine: 192.168.61.37
	I1205 20:31:45.701882  585025 main.go:141] libmachine: (no-preload-816185) Reserving static IP address...
	I1205 20:31:45.702270  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.702313  585025 main.go:141] libmachine: (no-preload-816185) DBG | skip adding static IP to network mk-no-preload-816185 - found existing host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"}
	I1205 20:31:45.702327  585025 main.go:141] libmachine: (no-preload-816185) Reserved static IP address: 192.168.61.37
	I1205 20:31:45.702343  585025 main.go:141] libmachine: (no-preload-816185) Waiting for SSH to be available...
	I1205 20:31:45.702355  585025 main.go:141] libmachine: (no-preload-816185) DBG | Getting to WaitForSSH function...
	I1205 20:31:45.704606  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.704941  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.704964  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.705115  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH client type: external
	I1205 20:31:45.705146  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa (-rw-------)
	I1205 20:31:45.705181  585025 main.go:141] libmachine: (no-preload-816185) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:45.705212  585025 main.go:141] libmachine: (no-preload-816185) DBG | About to run SSH command:
	I1205 20:31:45.705224  585025 main.go:141] libmachine: (no-preload-816185) DBG | exit 0
	I1205 20:31:45.828472  585025 main.go:141] libmachine: (no-preload-816185) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:45.828882  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetConfigRaw
	I1205 20:31:45.829596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:45.832338  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832643  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.832671  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832970  585025 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/config.json ...
	I1205 20:31:45.833244  585025 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:45.833275  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:45.833498  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.835937  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836344  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.836375  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836555  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.836744  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.836906  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.837046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.837207  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.837441  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.837456  585025 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:45.940890  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:45.940926  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941234  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:31:45.941262  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941453  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.944124  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944537  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.944585  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944677  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.944862  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945026  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945169  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.945343  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.945511  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.945523  585025 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-816185 && echo "no-preload-816185" | sudo tee /etc/hostname
	I1205 20:31:42.220435  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.720366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.219567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.719652  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.220259  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.719556  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.219850  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.720302  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.220377  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.720107  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.229369  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:47.229421  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:46.063755  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-816185
	
	I1205 20:31:46.063794  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.066742  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067177  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.067208  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067371  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.067576  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067756  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067937  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.068147  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.068392  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.068411  585025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-816185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-816185/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-816185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:46.182072  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
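Editor's note: the two SSH commands above set the guest's hostname and then patch /etc/hosts so the 127.0.1.1 entry matches it. A minimal sketch of the same idea follows; the helper name is illustrative and the grep/sed patterns are a simplification of the script shown in the log, not minikube's actual API.

package sketch

import "fmt"

// hostnameCmd assembles a shell snippet of the kind run over SSH above:
// set the transient hostname, persist it to /etc/hostname, and ensure
// /etc/hosts carries a matching 127.0.1.1 entry.
func hostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -q '\s%[1]s$' /etc/hosts; then
  if grep -q '^127.0.1.1\s' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}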
	I1205 20:31:46.182110  585025 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:46.182144  585025 buildroot.go:174] setting up certificates
	I1205 20:31:46.182160  585025 provision.go:84] configureAuth start
	I1205 20:31:46.182172  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:46.182490  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:46.185131  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185461  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.185493  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185684  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.188070  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188467  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.188499  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188606  585025 provision.go:143] copyHostCerts
	I1205 20:31:46.188674  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:46.188695  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:46.188753  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:46.188860  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:46.188872  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:46.188892  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:46.188973  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:46.188980  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:46.188998  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:46.189044  585025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.no-preload-816185 san=[127.0.0.1 192.168.61.37 localhost minikube no-preload-816185]
	I1205 20:31:46.460195  585025 provision.go:177] copyRemoteCerts
	I1205 20:31:46.460323  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:46.460394  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.463701  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464171  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.464224  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464422  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.464646  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.464839  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.465024  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.557665  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 20:31:46.583225  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:46.608114  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:46.633059  585025 provision.go:87] duration metric: took 450.879004ms to configureAuth
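Editor's note: configureAuth above refreshes each host certificate by removing the stale copy and copying the source back in (the found/rm/cp triplets), then generates a server cert with the listed SANs. A minimal local-file sketch of the refresh pattern, assuming illustrative paths and file mode rather than minikube's exec_runner:

package sketch

import "os"

// refreshHostCert mirrors the copyHostCerts pattern in the log: remove any
// stale copy at dst, then copy the source certificate into place.
func refreshHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	return os.WriteFile(dst, data, 0o600)
}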
	I1205 20:31:46.633100  585025 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:46.633319  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:46.633400  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.636634  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637103  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.637138  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637368  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.637624  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.637841  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.638000  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.638189  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.638425  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.638442  585025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:46.877574  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:46.877610  585025 machine.go:96] duration metric: took 1.044347044s to provisionDockerMachine
	I1205 20:31:46.877623  585025 start.go:293] postStartSetup for "no-preload-816185" (driver="kvm2")
	I1205 20:31:46.877634  585025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:46.877668  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:46.878007  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:46.878046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.881022  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881361  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.881422  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881554  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.881741  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.881883  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.882045  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.967997  585025 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:46.972667  585025 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:46.972697  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:46.972770  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:46.972844  585025 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:46.972931  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:46.983157  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:47.009228  585025 start.go:296] duration metric: took 131.588013ms for postStartSetup
	I1205 20:31:47.009272  585025 fix.go:56] duration metric: took 19.33958416s for fixHost
	I1205 20:31:47.009296  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.012039  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012388  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.012416  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012620  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.012858  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013022  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.013318  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:47.013490  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:47.013501  585025 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:47.117166  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430707.083043174
	
	I1205 20:31:47.117195  585025 fix.go:216] guest clock: 1733430707.083043174
	I1205 20:31:47.117203  585025 fix.go:229] Guest: 2024-12-05 20:31:47.083043174 +0000 UTC Remote: 2024-12-05 20:31:47.009275956 +0000 UTC m=+361.003271038 (delta=73.767218ms)
	I1205 20:31:47.117226  585025 fix.go:200] guest clock delta is within tolerance: 73.767218ms
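Editor's note: the fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host clock and accept the skew if it is small enough. A minimal sketch of that check; the tolerance value passed in is an assumption, not minikube's constant:

package sketch

import "time"

// clockDeltaOK reports whether the guest/host clock skew is within tolerance,
// as in the "guest clock delta is within tolerance" log line above.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}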
	I1205 20:31:47.117232  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 19.447576666s
	I1205 20:31:47.117259  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.117541  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:47.120283  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120627  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.120653  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120805  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121301  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121492  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121612  585025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:47.121656  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.121727  585025 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:47.121750  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.124146  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124503  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124530  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124723  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124922  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124933  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125086  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125126  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125227  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.125505  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125653  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.221731  585025 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:47.228177  585025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:47.377695  585025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:47.384534  585025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:47.384623  585025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:47.402354  585025 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:47.402388  585025 start.go:495] detecting cgroup driver to use...
	I1205 20:31:47.402454  585025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:47.426593  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:47.443953  585025 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:47.444011  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:47.461107  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:47.477872  585025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:47.617097  585025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:47.780021  585025 docker.go:233] disabling docker service ...
	I1205 20:31:47.780140  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:47.795745  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:47.809573  585025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:47.959910  585025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:48.081465  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:48.096513  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:48.116342  585025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:48.116409  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.128016  585025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:48.128095  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.139511  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.151241  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.162858  585025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:48.174755  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.185958  585025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.203724  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
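Editor's note: the ssh_runner lines above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image, switching the cgroup manager, adjusting conmon_cgroup, and injecting the unprivileged-port sysctl. A sketch of how two of those sed commands could be assembled; the helper is hypothetical and only covers the first two edits:

package sketch

import "fmt"

// crioConfigEdits returns sed-style edits for the CRI-O drop-in config:
// pin the pause image and set the cgroup manager.
func crioConfigEdits(pauseImage, cgroupManager string) []string {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
	}
}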
	I1205 20:31:48.215682  585025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:48.226478  585025 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:48.226551  585025 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:48.242781  585025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
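Editor's note: the netfilter step above fails to read the bridge-nf sysctl (status 255, "No such file or directory"), falls back to loading br_netfilter, and then enables IPv4 forwarding before restarting crio. A minimal sketch of that fallback, with runCmd as a hypothetical stand-in for minikube's SSH command runner:

package sketch

// ensureBrNetfilter loads br_netfilter when the bridge-nf sysctl is missing,
// then enables IPv4 forwarding, mirroring the sequence in the log above.
func ensureBrNetfilter(runCmd func(cmd string) error) error {
	if err := runCmd("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		// sysctl not present yet: load the module that provides it
		if err := runCmd("sudo modprobe br_netfilter"); err != nil {
			return err
		}
	}
	return runCmd(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
}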
	I1205 20:31:48.254921  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:48.373925  585025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:48.471515  585025 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:48.471625  585025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:48.477640  585025 start.go:563] Will wait 60s for crictl version
	I1205 20:31:48.477707  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.481862  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:48.521367  585025 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:48.521465  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.552343  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.583089  585025 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:31:48.043043  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:50.043172  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:48.584504  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:48.587210  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587539  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:48.587568  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587788  585025 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:48.592190  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:48.606434  585025 kubeadm.go:883] updating cluster {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:48.606605  585025 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:48.606666  585025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:48.642948  585025 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:48.642978  585025 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:48.643061  585025 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.643092  585025 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.643168  585025 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.643075  585025 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.643248  585025 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 20:31:48.643119  585025 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644692  585025 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.644712  585025 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 20:31:48.644694  585025 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.644798  585025 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.644800  585025 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644858  585025 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.811007  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.819346  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.859678  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 20:31:48.864065  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.864191  585025 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 20:31:48.864249  585025 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.864310  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.883959  585025 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 20:31:48.884022  585025 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.884078  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.902180  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.918167  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.946617  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.039706  585025 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 20:31:49.039760  585025 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.039783  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.039808  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039869  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.039887  585025 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 20:31:49.039913  585025 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.039938  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039947  585025 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 20:31:49.039969  585025 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.040001  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.040002  585025 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 20:31:49.040026  585025 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.040069  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.098900  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.098990  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.105551  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.105588  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.105612  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.105646  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.201473  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.218211  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.257277  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.257335  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.257345  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.257479  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.316037  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 20:31:49.316135  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.316159  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.356780  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 20:31:49.356906  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:49.382843  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.405772  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.405863  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.428491  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 20:31:49.428541  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 20:31:49.428563  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428587  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 20:31:49.428611  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428648  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:49.487794  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 20:31:49.487825  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 20:31:49.487893  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 20:31:49.487917  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:31:49.487927  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:49.488022  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:31:49.830311  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:47.219913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.720441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.220220  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.719997  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.219843  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.719591  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.220132  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.719528  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.720234  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.230527  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:52.230575  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:52.543415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:55.042668  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:52.150499  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.721854606s)
	I1205 20:31:52.150547  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 20:31:52.150573  585025 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150588  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.721911838s)
	I1205 20:31:52.150623  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150627  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 20:31:52.150697  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.662646854s)
	I1205 20:31:52.150727  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 20:31:52.150752  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.662648047s)
	I1205 20:31:52.150776  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 20:31:52.150785  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.662799282s)
	I1205 20:31:52.150804  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1205 20:31:52.150834  585025 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.320487562s)
	I1205 20:31:52.150874  585025 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:31:52.150907  585025 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.150943  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:55.858372  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.707687772s)
	I1205 20:31:55.858414  585025 ssh_runner.go:235] Completed: which crictl: (3.707446137s)
	I1205 20:31:55.858498  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:55.858426  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 20:31:55.858580  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.858640  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.901375  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.219602  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.719522  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.220117  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.720426  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.220177  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.720100  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.219569  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.719796  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.219490  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.720420  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.231370  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:57.231415  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.612431  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": read tcp 192.168.50.1:36198->192.168.50.96:8444: read: connection reset by peer
	I1205 20:31:57.727638  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.728368  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
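Editor's note: the 585929 api_server.go lines above show a /healthz probe cycling through client timeouts, a connection reset, and a connection refused while the apiserver comes back. A minimal sketch of such a probe loop; the 5s per-request timeout and 500ms retry interval are assumptions, not minikube's actual values:

package sketch

import (
	"context"
	"crypto/tls"
	"net/http"
	"time"
)

// pollHealthz hits the apiserver /healthz endpoint until it returns 200 OK
// or the outer context expires, tolerating transient connection errors.
func pollHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
		},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}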
	I1205 20:31:57.042989  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:59.043517  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:57.843623  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.984954959s)
	I1205 20:31:57.843662  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 20:31:57.843683  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843731  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843732  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.942323285s)
	I1205 20:31:57.843821  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:00.030765  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.186998467s)
	I1205 20:32:00.030810  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 20:32:00.030840  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.030846  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.18699947s)
	I1205 20:32:00.030897  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:32:00.030906  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.031026  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:31:57.219497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.720337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.219807  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.720112  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.219949  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.719626  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.219871  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.719466  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.219491  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.719760  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
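Editor's note: the repeated pgrep commands from process 585602 above are a wait loop for the kube-apiserver process, probing roughly every 500ms. A minimal sketch of that loop, with runCmd as a hypothetical stand-in for the real SSH runner and the timeout left to the caller:

package sketch

import (
	"fmt"
	"time"
)

// waitForAPIServerProcess polls pgrep until the kube-apiserver process
// appears or the deadline passes, as in the log lines above.
func waitForAPIServerProcess(runCmd func(cmd string) error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runCmd(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}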
	I1205 20:31:58.227807  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:01.044658  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:03.542453  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:05.542887  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:01.486433  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455500806s)
	I1205 20:32:01.486479  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 20:32:01.486512  585025 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:01.486513  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.455460879s)
	I1205 20:32:01.486589  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:32:01.486592  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:03.658906  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.172262326s)
	I1205 20:32:03.658947  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 20:32:03.658979  585025 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:03.659024  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:04.304774  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 20:32:04.304825  585025 cache_images.go:123] Successfully loaded all cached images
	I1205 20:32:04.304832  585025 cache_images.go:92] duration metric: took 15.661840579s to LoadCachedImages
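Editor's note: the cache_images sequence above, because no preload tarball matched, checks each required image by ID, removes stale tags with crictl, and loads the cached tarballs with podman. A minimal per-image sketch of that flow; run is a hypothetical runner returning the command's stdout, and error handling is simplified:

package sketch

import "strings"

// loadCachedImage loads one cached image tarball into the runtime if the
// image is not already present at the expected ID, mirroring the log above.
func loadCachedImage(run func(cmd string) (string, error), image, wantID, tarball string) error {
	gotID, _ := run("sudo podman image inspect --format {{.Id}} " + image)
	if strings.TrimSpace(gotID) == wantID {
		return nil // already loaded, nothing to transfer
	}
	// best-effort removal of any stale tag; a missing image is not fatal here
	_, _ = run("sudo /usr/bin/crictl rmi " + image)
	_, err := run("sudo podman load -i " + tarball)
	return err
}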
	I1205 20:32:04.304846  585025 kubeadm.go:934] updating node { 192.168.61.37 8443 v1.31.2 crio true true} ...
	I1205 20:32:04.304983  585025 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-816185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
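Editor's note: the kubelet unit text above is a systemd drop-in. The bare "ExecStart=" line clears the ExecStart inherited from the base unit; without it systemd would reject a second ExecStart for a normal (non-oneshot) service. A sketch of how such a drop-in could be assembled; the argument names are illustrative:

package sketch

import "fmt"

// kubeletDropIn builds a systemd drop-in that overrides the kubelet command line.
func kubeletDropIn(version, nodeName, nodeIP string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, version, nodeName, nodeIP)
}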
	I1205 20:32:04.305057  585025 ssh_runner.go:195] Run: crio config
	I1205 20:32:04.350303  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:04.350332  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:04.350352  585025 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:32:04.350383  585025 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.37 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-816185 NodeName:no-preload-816185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:32:04.350534  585025 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-816185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.37"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.37"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:32:04.350618  585025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:32:04.362733  585025 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:32:04.362815  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:32:04.374219  585025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 20:32:04.392626  585025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:32:04.409943  585025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1205 20:32:04.428180  585025 ssh_runner.go:195] Run: grep 192.168.61.37	control-plane.minikube.internal$ /etc/hosts
	I1205 20:32:04.432433  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:32:04.447274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:04.591755  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:04.609441  585025 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185 for IP: 192.168.61.37
	I1205 20:32:04.609472  585025 certs.go:194] generating shared ca certs ...
	I1205 20:32:04.609494  585025 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:04.609664  585025 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:32:04.609729  585025 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:32:04.609745  585025 certs.go:256] generating profile certs ...
	I1205 20:32:04.609910  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/client.key
	I1205 20:32:04.609991  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key.e9b85612
	I1205 20:32:04.610027  585025 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key
	I1205 20:32:04.610146  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:32:04.610173  585025 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:32:04.610182  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:32:04.610216  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:32:04.610264  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:32:04.610313  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:32:04.610377  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:32:04.611264  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:32:04.642976  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:32:04.679840  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:32:04.707526  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:32:04.746333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:32:04.782671  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:32:04.819333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:32:04.845567  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:32:04.870304  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:32:04.894597  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:32:04.918482  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:32:04.942992  585025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:32:04.960576  585025 ssh_runner.go:195] Run: openssl version
	I1205 20:32:04.966908  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:32:04.978238  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.982959  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.983023  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.989070  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:32:05.000979  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:32:05.012901  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.017583  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.018169  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.025450  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:32:05.037419  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:32:05.050366  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055211  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055255  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.061388  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:32:05.074182  585025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:32:05.079129  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:32:05.085580  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:32:05.091938  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:32:05.099557  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:32:05.105756  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:32:05.112019  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
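	(Editor's note, not part of the log: the "-checkend 86400" runs above ask openssl whether each control-plane certificate remains valid for at least another 24 hours before the cluster restart proceeds. A minimal Go sketch of the same check is shown below for reference; the certificate path and the 24-hour threshold are illustrative values taken from the log, not minikube's actual implementation.)

	// certcheck.go - illustrative sketch of an "expires within N" check,
	// equivalent in spirit to `openssl x509 -checkend 86400`; path and
	// threshold below are example values, not minikube's code.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path
	// expires within the given duration.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		if soon {
			fmt.Println("certificate expires within 24h; regeneration needed")
		} else {
			fmt.Println("certificate valid for at least another 24h")
		}
	}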
	I1205 20:32:05.118426  585025 kubeadm.go:392] StartCluster: {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:32:05.118540  585025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:32:05.118622  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.162731  585025 cri.go:89] found id: ""
	I1205 20:32:05.162821  585025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:32:05.174100  585025 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:32:05.174127  585025 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:32:05.174181  585025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:32:05.184949  585025 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:32:05.186127  585025 kubeconfig.go:125] found "no-preload-816185" server: "https://192.168.61.37:8443"
	I1205 20:32:05.188601  585025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:32:05.198779  585025 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.37
	I1205 20:32:05.198815  585025 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:32:05.198828  585025 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:32:05.198881  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.241175  585025 cri.go:89] found id: ""
	I1205 20:32:05.241247  585025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:32:05.259698  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:32:05.270282  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:32:05.270310  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:32:05.270370  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:32:05.280440  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:32:05.280519  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:32:05.290825  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:32:05.300680  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:32:05.300745  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:32:05.311108  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.320854  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:32:05.320918  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.331099  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:32:05.340948  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:32:05.341017  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:32:05.351280  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:32:05.361567  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:05.477138  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:02.220337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:02.720145  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.219463  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.719913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.219813  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.719940  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.219830  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.720324  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.220287  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.719584  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.228372  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:03.228433  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:08.042416  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:10.043011  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:06.259256  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.483460  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.557633  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.666782  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:32:06.666885  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.167840  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.667069  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.701559  585025 api_server.go:72] duration metric: took 1.034769472s to wait for apiserver process to appear ...
	I1205 20:32:07.701592  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:32:07.701612  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.640462  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.640498  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.640521  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.647093  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.647118  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.702286  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.711497  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:10.711528  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:07.219989  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.720289  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.220381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.719947  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.219838  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.719666  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.219756  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.720312  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.220369  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.720004  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.202247  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.206625  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.206650  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:11.702760  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.718941  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.718974  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:12.202567  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:12.207589  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:32:12.214275  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:12.214304  585025 api_server.go:131] duration metric: took 4.512704501s to wait for apiserver health ...
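	(Editor's note, not part of the log: the api_server.go entries above show minikube polling https://192.168.61.37:8443/healthz, treating 403 responses from the anonymous probe and 500 responses while post-start hooks are still failing as "not yet healthy", and stopping once the endpoint returns 200. A minimal Go sketch of that polling pattern follows; the URL, interval, and timeout are illustrative assumptions, not minikube's actual values.)

	// healthzpoll.go - illustrative sketch only; not minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		// Anonymous probe against a self-signed apiserver certificate, so skip
		// verification; this mirrors the "system:anonymous" 403s seen in the log
		// before RBAC bootstrap roles are in place.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is serving
				}
				// 403 and 500 mean "keep waiting".
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("timed out waiting for %s to return 200", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.37:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}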
	I1205 20:32:12.214314  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:12.214321  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:12.216193  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:08.229499  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:08.229544  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:12.545378  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:15.043628  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:12.217640  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:12.241907  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:32:12.262114  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:12.275246  585025 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:12.275296  585025 system_pods.go:61] "coredns-7c65d6cfc9-j2hr2" [9ce413ab-c304-40dd-af68-80f15db0e2ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:12.275308  585025 system_pods.go:61] "etcd-no-preload-816185" [ddc20062-02d9-4f9d-a2fb-fa2c7d6aa1cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:12.275319  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [07ff76f2-b05e-4434-b8f9-448bc200507a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:12.275328  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [7c701058-791a-4097-a913-f6989a791067] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:12.275340  585025 system_pods.go:61] "kube-proxy-rjp4j" [340e9ccc-0290-4d3d-829c-44ad65410f3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:12.275348  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [c2f3b04c-9e3a-4060-a6d0-fb9eb2aa5e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:32:12.275359  585025 system_pods.go:61] "metrics-server-6867b74b74-vjwq2" [47ff24fe-0edb-4d06-b280-a0d965b25dae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:12.275367  585025 system_pods.go:61] "storage-provisioner" [bd385e87-56ea-417c-a4a8-b8a6e4f94114] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:12.275376  585025 system_pods.go:74] duration metric: took 13.23725ms to wait for pod list to return data ...
	I1205 20:32:12.275387  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:12.279719  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:12.279746  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:12.279755  585025 node_conditions.go:105] duration metric: took 4.364464ms to run NodePressure ...
	I1205 20:32:12.279774  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:12.562221  585025 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566599  585025 kubeadm.go:739] kubelet initialised
	I1205 20:32:12.566627  585025 kubeadm.go:740] duration metric: took 4.374855ms waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566639  585025 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:12.571780  585025 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:14.579614  585025 pod_ready.go:103] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:12.220304  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:12.720348  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.219553  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.720078  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.219614  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.719625  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.220118  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.720577  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.220392  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.719538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.230519  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:13.230567  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.061543  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.061583  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.061603  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.078424  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.078457  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.227852  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.553664  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.553705  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:16.728155  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.734800  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.734853  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.228013  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.233541  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:17.233577  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.727878  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.736731  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:32:17.746474  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:17.746511  585929 api_server.go:131] duration metric: took 41.019245279s to wait for apiserver health ...
	I1205 20:32:17.746523  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:32:17.746531  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:17.748464  585929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:17.750113  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:17.762750  585929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:32:17.786421  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:17.826859  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:17.826918  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:17.826934  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:17.826946  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:17.826959  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:17.826969  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:17.826980  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:32:17.826989  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:17.827000  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:17.827010  585929 system_pods.go:74] duration metric: took 40.565274ms to wait for pod list to return data ...
	I1205 20:32:17.827025  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:17.838000  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:17.838034  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:17.838050  585929 node_conditions.go:105] duration metric: took 11.010352ms to run NodePressure ...
	I1205 20:32:17.838075  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:18.215713  585929 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222162  585929 kubeadm.go:739] kubelet initialised
	I1205 20:32:18.222187  585929 kubeadm.go:740] duration metric: took 6.444578ms waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222199  585929 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:18.226988  585929 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.235570  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235600  585929 pod_ready.go:82] duration metric: took 8.582972ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.235609  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235617  585929 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.242596  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242623  585929 pod_ready.go:82] duration metric: took 6.99814ms for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.242634  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242642  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.248351  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248373  585929 pod_ready.go:82] duration metric: took 5.725371ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.248383  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248390  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.258151  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258174  585929 pod_ready.go:82] duration metric: took 9.778119ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.258183  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258190  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.619579  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619623  585929 pod_ready.go:82] duration metric: took 361.426091ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.619638  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619649  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.019623  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019655  585929 pod_ready.go:82] duration metric: took 399.997558ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.019669  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019676  585929 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.420201  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420228  585929 pod_ready.go:82] duration metric: took 400.54576ms for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.420242  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420251  585929 pod_ready.go:39] duration metric: took 1.198040831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
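	[annotation] Every per-pod wait above is short-circuited ("skipping!") because the node itself still reports Ready=False, so the loop finishes in about 1.2s without actually confirming any pod. A hand-run equivalent of what the harness is ultimately waiting for (illustrative sketch, not the command minikube executes):

	    kubectl wait --for=condition=Ready node/default-k8s-diff-port-942599 --timeout=4m
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m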
	I1205 20:32:19.420292  585929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:32:19.434385  585929 ops.go:34] apiserver oom_adj: -16
	I1205 20:32:19.434420  585929 kubeadm.go:597] duration metric: took 45.406934122s to restartPrimaryControlPlane
	I1205 20:32:19.434434  585929 kubeadm.go:394] duration metric: took 45.464483994s to StartCluster
	I1205 20:32:19.434460  585929 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.434560  585929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:32:19.436299  585929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.436590  585929 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:32:19.436736  585929 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:32:19.436837  585929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436858  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:32:19.436873  585929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.436883  585929 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:32:19.436923  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.436938  585929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436974  585929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-942599"
	I1205 20:32:19.436922  585929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.437024  585929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.437051  585929 addons.go:243] addon metrics-server should already be in state true
	I1205 20:32:19.437090  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.437365  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437407  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437452  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437480  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437509  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437514  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.438584  585929 out.go:177] * Verifying Kubernetes components...
	I1205 20:32:19.440376  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:19.453761  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I1205 20:32:19.453782  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I1205 20:32:19.453767  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I1205 20:32:19.454289  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454441  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454451  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454851  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454871  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.455005  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455021  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455286  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455350  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455409  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455461  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.455910  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455927  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455958  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.455966  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.458587  585929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.458605  585929 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:32:19.458627  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.458955  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.458995  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.472175  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I1205 20:32:19.472667  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.472927  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I1205 20:32:19.473215  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.473233  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.473401  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40929
	I1205 20:32:19.473570  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473608  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.473839  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.474155  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474187  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474290  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474313  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474546  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474638  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474711  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.475267  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.475320  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.476105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.476447  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.478117  585929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:19.478117  585929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
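	[annotation] The metrics-server addon in this run is pointed at fake.domain/registry.k8s.io/echoserver:1.4, an image that cannot be pulled, which is consistent with the metrics-server pods in these logs staying Pending / not Ready. The pull failure can be confirmed with (illustrative; assumes the addon's standard k8s-app=metrics-server label):

	    kubectl -n kube-system describe pod -l k8s-app=metrics-server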
	I1205 20:32:17.545165  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.044285  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:17.079986  585025 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:17.080014  585025 pod_ready.go:82] duration metric: took 4.508210865s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:17.080025  585025 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.086070  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.587742  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:20.587775  585025 pod_ready.go:82] duration metric: took 3.507742173s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:20.587789  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.479638  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:32:19.479658  585929 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:32:19.479686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.479719  585929 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.479737  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:32:19.479750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.483208  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483350  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483773  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483790  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483873  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483887  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483936  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484123  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484294  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484324  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484438  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.484456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484571  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.533651  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I1205 20:32:19.534273  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.534802  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.534833  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.535282  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.535535  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.538221  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.538787  585929 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.538804  585929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:32:19.538825  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.541876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542318  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.542354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542556  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.542744  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.542944  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.543129  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.630282  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:19.652591  585929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:19.719058  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.810931  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.812113  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:32:19.812136  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:32:19.875725  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:32:19.875761  585929 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:32:19.946353  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:19.946390  585929 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:32:20.010445  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:20.231055  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231082  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231425  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231454  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231469  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231478  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231476  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.231764  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231784  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231783  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.247021  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.247051  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.247463  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.247490  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.247488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.074948  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.263976727s)
	I1205 20:32:21.075015  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075029  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075397  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075438  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.075449  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075457  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.075766  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075785  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134215  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.123724822s)
	I1205 20:32:21.134271  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134588  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134604  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134612  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134615  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.134620  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134878  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134891  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134904  585929 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-942599"
	I1205 20:32:21.136817  585929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
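	[annotation] With default-storageclass, storage-provisioner and metrics-server applied, metrics-server still has to register its aggregated API and start serving before the later addon checks can pass. A quick manual verification (hypothetical commands, assuming the standard v1beta1.metrics.k8s.io APIService name):

	    kubectl -n kube-system get deploy metrics-server
	    kubectl get apiservice v1beta1.metrics.k8s.io
	    kubectl top nodes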
	I1205 20:32:17.220437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:17.220539  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:17.272666  585602 cri.go:89] found id: ""
	I1205 20:32:17.272702  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.272716  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:17.272723  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:17.272797  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:17.314947  585602 cri.go:89] found id: ""
	I1205 20:32:17.314977  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.314989  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:17.314996  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:17.315061  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:17.354511  585602 cri.go:89] found id: ""
	I1205 20:32:17.354548  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.354561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:17.354571  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:17.354640  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:17.393711  585602 cri.go:89] found id: ""
	I1205 20:32:17.393745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.393759  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:17.393768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:17.393836  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:17.434493  585602 cri.go:89] found id: ""
	I1205 20:32:17.434526  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.434535  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:17.434541  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:17.434602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:17.476201  585602 cri.go:89] found id: ""
	I1205 20:32:17.476235  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.476245  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:17.476253  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:17.476341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:17.516709  585602 cri.go:89] found id: ""
	I1205 20:32:17.516745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.516755  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:17.516762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:17.516818  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:17.557270  585602 cri.go:89] found id: ""
	I1205 20:32:17.557305  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.557314  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:17.557324  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:17.557348  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:17.606494  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:17.606540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:17.681372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:17.681412  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:17.696778  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:17.696816  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:17.839655  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:17.839679  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:17.839717  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
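	[annotation] This retry cycle for process 585602 (pgrep for kube-apiserver, one crictl query per component, then kubelet/dmesg/describe-nodes/CRI-O log collection) repeats verbatim below; nothing is found because the v1.20.0 control plane has not come back up and localhost:8443 refuses connections. The per-component sweep reduces to roughly the following (hedged sketch of the same crictl calls shown in the log):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$c"
	    done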
	I1205 20:32:20.423552  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:20.439794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:20.439875  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:20.482820  585602 cri.go:89] found id: ""
	I1205 20:32:20.482866  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.482880  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:20.482888  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:20.482958  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:20.523590  585602 cri.go:89] found id: ""
	I1205 20:32:20.523629  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.523641  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:20.523649  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:20.523727  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:20.601603  585602 cri.go:89] found id: ""
	I1205 20:32:20.601638  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.601648  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:20.601656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:20.601728  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:20.643927  585602 cri.go:89] found id: ""
	I1205 20:32:20.643959  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.643972  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:20.643981  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:20.644054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:20.690935  585602 cri.go:89] found id: ""
	I1205 20:32:20.690964  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.690975  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:20.690984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:20.691054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:20.728367  585602 cri.go:89] found id: ""
	I1205 20:32:20.728400  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.728412  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:20.728420  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:20.728489  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:20.766529  585602 cri.go:89] found id: ""
	I1205 20:32:20.766562  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.766571  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:20.766578  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:20.766657  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:20.805641  585602 cri.go:89] found id: ""
	I1205 20:32:20.805680  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.805690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:20.805701  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:20.805718  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:20.884460  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:20.884495  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:20.884514  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.998367  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:20.998429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:21.041210  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:21.041247  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:21.103519  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:21.103557  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:21.138175  585929 addons.go:510] duration metric: took 1.701453382s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:32:21.657269  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:22.541880  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:24.543481  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:22.595422  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.594392  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:23.594419  585025 pod_ready.go:82] duration metric: took 3.006622534s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:23.594430  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:25.601616  585025 pod_ready.go:103] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.619187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:23.633782  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:23.633872  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:23.679994  585602 cri.go:89] found id: ""
	I1205 20:32:23.680023  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.680032  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:23.680038  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:23.680094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:23.718362  585602 cri.go:89] found id: ""
	I1205 20:32:23.718425  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.718439  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:23.718447  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:23.718520  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:23.758457  585602 cri.go:89] found id: ""
	I1205 20:32:23.758491  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.758500  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:23.758506  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:23.758558  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:23.794612  585602 cri.go:89] found id: ""
	I1205 20:32:23.794649  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.794662  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:23.794671  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:23.794738  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:23.832309  585602 cri.go:89] found id: ""
	I1205 20:32:23.832341  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.832354  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:23.832361  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:23.832421  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:23.868441  585602 cri.go:89] found id: ""
	I1205 20:32:23.868472  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.868484  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:23.868492  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:23.868573  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:23.902996  585602 cri.go:89] found id: ""
	I1205 20:32:23.903025  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.903036  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:23.903050  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:23.903115  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:23.939830  585602 cri.go:89] found id: ""
	I1205 20:32:23.939865  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.939879  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:23.939892  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:23.939909  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:23.992310  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:23.992354  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:24.007378  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:24.007414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:24.077567  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:24.077594  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:24.077608  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:24.165120  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:24.165163  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:26.711674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:26.726923  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:26.727008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:26.763519  585602 cri.go:89] found id: ""
	I1205 20:32:26.763554  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.763563  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:26.763570  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:26.763628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:26.802600  585602 cri.go:89] found id: ""
	I1205 20:32:26.802635  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.802644  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:26.802650  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:26.802705  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:26.839920  585602 cri.go:89] found id: ""
	I1205 20:32:26.839967  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.839981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:26.839989  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:26.840076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:24.157515  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:26.657197  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:27.656811  585929 node_ready.go:49] node "default-k8s-diff-port-942599" has status "Ready":"True"
	I1205 20:32:27.656842  585929 node_ready.go:38] duration metric: took 8.004215314s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:27.656854  585929 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:27.662792  585929 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668485  585929 pod_ready.go:93] pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.668510  585929 pod_ready.go:82] duration metric: took 5.690516ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668521  585929 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:26.543536  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:28.544214  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:27.101514  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.101540  585025 pod_ready.go:82] duration metric: took 3.507102769s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.101551  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108084  585025 pod_ready.go:93] pod "kube-proxy-rjp4j" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.108116  585025 pod_ready.go:82] duration metric: took 6.557141ms for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108131  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112915  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.112942  585025 pod_ready.go:82] duration metric: took 4.801285ms for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112955  585025 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.119094  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:26.876377  585602 cri.go:89] found id: ""
	I1205 20:32:26.876406  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.876416  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:26.876422  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:26.876491  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:26.913817  585602 cri.go:89] found id: ""
	I1205 20:32:26.913845  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.913854  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:26.913862  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:26.913936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:26.955739  585602 cri.go:89] found id: ""
	I1205 20:32:26.955775  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.955788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:26.955798  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:26.955863  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:26.996191  585602 cri.go:89] found id: ""
	I1205 20:32:26.996223  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.996234  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:26.996242  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:26.996341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:27.040905  585602 cri.go:89] found id: ""
	I1205 20:32:27.040935  585602 logs.go:282] 0 containers: []
	W1205 20:32:27.040947  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:27.040958  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:27.040973  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:27.098103  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:27.098140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:27.116538  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:27.116574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:27.204154  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:27.204187  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:27.204208  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:27.300380  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:27.300431  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:29.840944  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:29.855784  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:29.855869  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:29.893728  585602 cri.go:89] found id: ""
	I1205 20:32:29.893765  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.893777  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:29.893786  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:29.893867  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:29.930138  585602 cri.go:89] found id: ""
	I1205 20:32:29.930176  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.930186  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:29.930193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:29.930248  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:29.966340  585602 cri.go:89] found id: ""
	I1205 20:32:29.966371  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.966380  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:29.966387  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:29.966463  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:30.003868  585602 cri.go:89] found id: ""
	I1205 20:32:30.003900  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.003920  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:30.003928  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:30.004001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:30.044332  585602 cri.go:89] found id: ""
	I1205 20:32:30.044363  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.044373  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:30.044380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:30.044445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:30.088044  585602 cri.go:89] found id: ""
	I1205 20:32:30.088085  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.088098  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:30.088106  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:30.088173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:30.124221  585602 cri.go:89] found id: ""
	I1205 20:32:30.124248  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.124258  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:30.124285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:30.124357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:30.162092  585602 cri.go:89] found id: ""
	I1205 20:32:30.162121  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.162133  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:30.162146  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:30.162162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:30.218526  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:30.218567  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:30.232240  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:30.232292  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:30.308228  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:30.308260  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:30.308296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:30.389348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:30.389391  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
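	(The cycle above repeats one probe per control-plane component: shell out to crictl with a name filter and treat empty output as "no container found", which is why every probe logs `found id: ""` and `0 containers` while the apiserver is down. Below is a minimal local sketch of that pattern, assuming crictl and passwordless sudo are available on the host; the helper name and loop are illustrative only and are not minikube's actual cri.go/ssh_runner.go API.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs is an illustrative helper (not minikube's API): it runs
	// `sudo crictl ps -a --quiet --name=<name>` locally and returns the container
	// IDs, mirroring the `found id: ""` / `0 containers` lines in the log above.
	// Assumes crictl is installed and sudo does not prompt for a password.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
		}
		// --quiet prints one container ID per line; an empty result means no match.
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Println("probe error:", err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}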
	I1205 20:32:29.177093  585929 pod_ready.go:93] pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.177118  585929 pod_ready.go:82] duration metric: took 1.508590352s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.177129  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185839  585929 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.185869  585929 pod_ready.go:82] duration metric: took 8.733028ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185883  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191924  585929 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.191950  585929 pod_ready.go:82] duration metric: took 6.059525ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191963  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256484  585929 pod_ready.go:93] pod "kube-proxy-5vdcq" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.256510  585929 pod_ready.go:82] duration metric: took 64.540117ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256521  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656933  585929 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.656961  585929 pod_ready.go:82] duration metric: took 400.432279ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656972  585929 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:31.664326  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.043630  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.044035  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.542861  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.120200  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.120303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.120532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
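	(The pod_ready.go lines above poll each pod until its Ready condition reports True, waiting up to 6m0s per pod and logging "Ready":"False" on every unsuccessful check. Below is a minimal sketch of that wait, assuming kubectl is on PATH and that the minikube profile name doubles as the kubeconfig context; the command, flags, and names are illustrative and are not taken from minikube's pod_ready.go.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady is an illustrative check (not minikube's pod_ready.go): it asks
	// kubectl for the pod's Ready condition and returns true when it is "True".
	func podReady(context, namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		// Poll every 2s for up to 6m, roughly the cadence of the pod_ready.go
		// lines in the log above. Context and pod names here are examples only.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			ready, err := podReady("default-k8s-diff-port-942599", "kube-system", "metrics-server-6867b74b74-rq8xm")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}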
	I1205 20:32:32.934497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:32.949404  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:32.949488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:33.006117  585602 cri.go:89] found id: ""
	I1205 20:32:33.006148  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.006157  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:33.006163  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:33.006231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:33.064907  585602 cri.go:89] found id: ""
	I1205 20:32:33.064945  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.064958  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:33.064966  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:33.065031  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:33.101268  585602 cri.go:89] found id: ""
	I1205 20:32:33.101295  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.101304  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:33.101310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:33.101378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:33.141705  585602 cri.go:89] found id: ""
	I1205 20:32:33.141733  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.141743  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:33.141750  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:33.141810  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:33.180983  585602 cri.go:89] found id: ""
	I1205 20:32:33.181011  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.181020  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:33.181026  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:33.181086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:33.220742  585602 cri.go:89] found id: ""
	I1205 20:32:33.220779  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.220791  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:33.220799  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:33.220871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:33.255980  585602 cri.go:89] found id: ""
	I1205 20:32:33.256009  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.256017  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:33.256024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:33.256080  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:33.292978  585602 cri.go:89] found id: ""
	I1205 20:32:33.293005  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.293013  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:33.293023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:33.293034  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:33.347167  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:33.347213  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:33.361367  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:33.361408  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:33.435871  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:33.435915  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:33.435932  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:33.518835  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:33.518880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:36.066359  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:36.080867  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:36.080947  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:36.117647  585602 cri.go:89] found id: ""
	I1205 20:32:36.117678  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.117689  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:36.117697  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:36.117763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:36.154376  585602 cri.go:89] found id: ""
	I1205 20:32:36.154412  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.154428  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:36.154436  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:36.154498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:36.193225  585602 cri.go:89] found id: ""
	I1205 20:32:36.193261  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.193274  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:36.193282  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:36.193347  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:36.230717  585602 cri.go:89] found id: ""
	I1205 20:32:36.230748  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.230758  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:36.230764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:36.230817  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:36.270186  585602 cri.go:89] found id: ""
	I1205 20:32:36.270238  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.270252  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:36.270262  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:36.270340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:36.306378  585602 cri.go:89] found id: ""
	I1205 20:32:36.306425  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.306438  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:36.306447  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:36.306531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:36.342256  585602 cri.go:89] found id: ""
	I1205 20:32:36.342289  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.342300  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:36.342306  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:36.342380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:36.380684  585602 cri.go:89] found id: ""
	I1205 20:32:36.380718  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.380732  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:36.380745  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:36.380768  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:36.436066  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:36.436109  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:36.450255  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:36.450285  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:36.521857  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:36.521883  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:36.521897  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:36.608349  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:36.608395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:34.163870  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:36.164890  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:38.042889  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.543140  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:37.619863  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.120462  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:39.157366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:39.171267  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:39.171357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:39.214459  585602 cri.go:89] found id: ""
	I1205 20:32:39.214490  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.214520  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:39.214528  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:39.214583  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:39.250312  585602 cri.go:89] found id: ""
	I1205 20:32:39.250352  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.250366  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:39.250375  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:39.250437  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:39.286891  585602 cri.go:89] found id: ""
	I1205 20:32:39.286932  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.286944  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:39.286952  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:39.287019  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:39.323923  585602 cri.go:89] found id: ""
	I1205 20:32:39.323958  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.323970  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:39.323979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:39.324053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:39.360280  585602 cri.go:89] found id: ""
	I1205 20:32:39.360322  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.360331  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:39.360337  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:39.360403  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:39.397599  585602 cri.go:89] found id: ""
	I1205 20:32:39.397637  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.397650  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:39.397659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:39.397731  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:39.435132  585602 cri.go:89] found id: ""
	I1205 20:32:39.435159  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.435168  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:39.435174  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:39.435241  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:39.470653  585602 cri.go:89] found id: ""
	I1205 20:32:39.470682  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.470690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:39.470700  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:39.470714  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:39.511382  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:39.511413  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:39.563955  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:39.563994  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:39.578015  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:39.578044  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:39.658505  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:39.658535  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:39.658550  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:38.665320  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:41.165054  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.545231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.042231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.620687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.120915  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.248607  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:42.263605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:42.263688  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:42.305480  585602 cri.go:89] found id: ""
	I1205 20:32:42.305508  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.305519  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:42.305527  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:42.305595  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:42.339969  585602 cri.go:89] found id: ""
	I1205 20:32:42.340001  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.340010  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:42.340016  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:42.340090  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:42.381594  585602 cri.go:89] found id: ""
	I1205 20:32:42.381630  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.381643  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:42.381651  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:42.381771  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:42.435039  585602 cri.go:89] found id: ""
	I1205 20:32:42.435072  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.435085  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:42.435093  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:42.435162  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:42.470567  585602 cri.go:89] found id: ""
	I1205 20:32:42.470595  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.470604  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:42.470610  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:42.470674  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:42.510695  585602 cri.go:89] found id: ""
	I1205 20:32:42.510723  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.510731  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:42.510738  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:42.510793  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:42.547687  585602 cri.go:89] found id: ""
	I1205 20:32:42.547711  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.547718  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:42.547735  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:42.547784  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:42.587160  585602 cri.go:89] found id: ""
	I1205 20:32:42.587191  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.587199  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:42.587211  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:42.587225  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:42.669543  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:42.669587  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:42.717795  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:42.717833  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:42.772644  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:42.772696  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:42.788443  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:42.788480  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:42.861560  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.362758  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:45.377178  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:45.377266  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:45.413055  585602 cri.go:89] found id: ""
	I1205 20:32:45.413088  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.413102  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:45.413111  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:45.413176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:45.453769  585602 cri.go:89] found id: ""
	I1205 20:32:45.453799  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.453808  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:45.453813  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:45.453879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:45.499481  585602 cri.go:89] found id: ""
	I1205 20:32:45.499511  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.499522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:45.499531  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:45.499598  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:45.537603  585602 cri.go:89] found id: ""
	I1205 20:32:45.537638  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.537647  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:45.537653  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:45.537707  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:45.572430  585602 cri.go:89] found id: ""
	I1205 20:32:45.572463  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.572471  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:45.572479  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:45.572556  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:45.610349  585602 cri.go:89] found id: ""
	I1205 20:32:45.610387  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.610398  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:45.610406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:45.610476  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:45.649983  585602 cri.go:89] found id: ""
	I1205 20:32:45.650018  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.650031  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:45.650038  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:45.650113  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:45.689068  585602 cri.go:89] found id: ""
	I1205 20:32:45.689099  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.689107  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:45.689118  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:45.689131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:45.743715  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:45.743758  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:45.759803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:45.759834  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:45.835107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.835133  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:45.835146  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:45.914590  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:45.914632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:43.665616  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:46.164064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.045269  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.544519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.619099  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.627948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:48.456633  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:48.475011  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:48.475086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:48.512878  585602 cri.go:89] found id: ""
	I1205 20:32:48.512913  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.512925  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:48.512933  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:48.513002  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:48.551708  585602 cri.go:89] found id: ""
	I1205 20:32:48.551737  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.551744  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:48.551751  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:48.551805  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:48.590765  585602 cri.go:89] found id: ""
	I1205 20:32:48.590791  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.590800  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:48.590806  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:48.590859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:48.629447  585602 cri.go:89] found id: ""
	I1205 20:32:48.629473  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.629481  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:48.629487  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:48.629540  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:48.667299  585602 cri.go:89] found id: ""
	I1205 20:32:48.667329  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.667339  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:48.667347  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:48.667414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:48.703771  585602 cri.go:89] found id: ""
	I1205 20:32:48.703816  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.703830  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:48.703841  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:48.703911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:48.747064  585602 cri.go:89] found id: ""
	I1205 20:32:48.747098  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.747111  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:48.747118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:48.747186  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.786608  585602 cri.go:89] found id: ""
	I1205 20:32:48.786649  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.786663  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:48.786684  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:48.786700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:48.860834  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:48.860866  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:48.860881  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:48.944029  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:48.944082  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:48.982249  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:48.982284  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:49.036460  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:49.036509  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.556456  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:51.571498  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:51.571590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:51.616890  585602 cri.go:89] found id: ""
	I1205 20:32:51.616924  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.616934  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:51.616942  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:51.617008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:51.660397  585602 cri.go:89] found id: ""
	I1205 20:32:51.660433  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.660445  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:51.660453  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:51.660543  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:51.698943  585602 cri.go:89] found id: ""
	I1205 20:32:51.698973  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.698981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:51.698988  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:51.699041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:51.737254  585602 cri.go:89] found id: ""
	I1205 20:32:51.737288  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.737297  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:51.737310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:51.737366  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:51.775560  585602 cri.go:89] found id: ""
	I1205 20:32:51.775592  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.775600  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:51.775606  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:51.775681  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:51.814314  585602 cri.go:89] found id: ""
	I1205 20:32:51.814370  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.814383  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:51.814393  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:51.814464  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:51.849873  585602 cri.go:89] found id: ""
	I1205 20:32:51.849913  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.849935  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:51.849944  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:51.850018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.164562  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:50.664498  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.044224  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.542721  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.118857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.120231  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:51.891360  585602 cri.go:89] found id: ""
	I1205 20:32:51.891388  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.891400  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:51.891412  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:51.891429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:51.943812  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:51.943854  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.959119  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:51.959152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:52.036014  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:52.036040  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:52.036059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:52.114080  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:52.114122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:54.657243  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:54.672319  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:54.672407  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:54.708446  585602 cri.go:89] found id: ""
	I1205 20:32:54.708475  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.708484  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:54.708491  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:54.708569  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:54.747309  585602 cri.go:89] found id: ""
	I1205 20:32:54.747347  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.747359  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:54.747370  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:54.747451  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:54.790742  585602 cri.go:89] found id: ""
	I1205 20:32:54.790772  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.790781  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:54.790787  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:54.790853  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:54.828857  585602 cri.go:89] found id: ""
	I1205 20:32:54.828885  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.828894  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:54.828902  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:54.828964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:54.867691  585602 cri.go:89] found id: ""
	I1205 20:32:54.867729  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.867740  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:54.867747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:54.867819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:54.907216  585602 cri.go:89] found id: ""
	I1205 20:32:54.907242  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.907249  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:54.907256  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:54.907308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:54.945800  585602 cri.go:89] found id: ""
	I1205 20:32:54.945827  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.945837  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:54.945844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:54.945895  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:54.993176  585602 cri.go:89] found id: ""
	I1205 20:32:54.993216  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.993228  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:54.993242  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:54.993258  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:55.045797  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:55.045835  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:55.060103  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:55.060136  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:55.129440  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:55.129467  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:55.129485  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:55.214949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:55.214999  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:53.164619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:55.663605  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.543148  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.543374  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.543687  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.620220  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.620759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.626643  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:57.755086  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:57.769533  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:57.769622  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:57.807812  585602 cri.go:89] found id: ""
	I1205 20:32:57.807847  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.807858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:57.807869  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:57.807941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:57.846179  585602 cri.go:89] found id: ""
	I1205 20:32:57.846209  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.846223  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:57.846232  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:57.846305  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:57.881438  585602 cri.go:89] found id: ""
	I1205 20:32:57.881473  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.881482  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:57.881496  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:57.881553  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:57.918242  585602 cri.go:89] found id: ""
	I1205 20:32:57.918283  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.918294  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:57.918302  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:57.918378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:57.962825  585602 cri.go:89] found id: ""
	I1205 20:32:57.962863  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.962873  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:57.962879  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:57.962955  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:58.004655  585602 cri.go:89] found id: ""
	I1205 20:32:58.004699  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.004711  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:58.004731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:58.004802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:58.043701  585602 cri.go:89] found id: ""
	I1205 20:32:58.043730  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.043738  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:58.043744  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:58.043802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:58.081400  585602 cri.go:89] found id: ""
	I1205 20:32:58.081437  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.081450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:58.081463  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:58.081486  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:58.135531  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:58.135573  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:58.149962  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:58.149998  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:58.227810  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:58.227834  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:58.227849  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:58.308173  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:58.308219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:00.848019  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:00.863423  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:00.863496  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:00.902526  585602 cri.go:89] found id: ""
	I1205 20:33:00.902553  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.902561  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:00.902567  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:00.902621  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:00.939891  585602 cri.go:89] found id: ""
	I1205 20:33:00.939932  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.939942  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:00.939948  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:00.940022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:00.981645  585602 cri.go:89] found id: ""
	I1205 20:33:00.981676  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.981684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:00.981691  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:00.981745  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:01.027753  585602 cri.go:89] found id: ""
	I1205 20:33:01.027780  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.027789  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:01.027795  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:01.027877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:01.064529  585602 cri.go:89] found id: ""
	I1205 20:33:01.064559  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.064567  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:01.064574  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:01.064628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:01.102239  585602 cri.go:89] found id: ""
	I1205 20:33:01.102272  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.102281  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:01.102287  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:01.102357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:01.139723  585602 cri.go:89] found id: ""
	I1205 20:33:01.139760  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.139770  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:01.139778  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:01.139845  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:01.176172  585602 cri.go:89] found id: ""
	I1205 20:33:01.176198  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.176207  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:01.176216  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:01.176231  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:01.230085  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:01.230133  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:01.245574  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:01.245617  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:01.340483  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:01.340520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:01.340537  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:01.416925  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:01.416972  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:58.164852  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.664376  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:02.677134  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.042415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.543101  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.119783  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.120647  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
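	The pod_ready lines interleaved here come from three other concurrent runs (logging as 585929, 585113 and 585025), each polling a metrics-server pod that never reports Ready. A hedged way to inspect one of those pods directly, with <context> standing in for the corresponding profile name (not shown in this excerpt):
	
	    kubectl --context <context> -n kube-system get pod metrics-server-6867b74b74-rq8xm
	    kubectl --context <context> -n kube-system describe pod metrics-server-6867b74b74-rq8xm
	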
	I1205 20:33:03.958855  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:03.974024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:03.974096  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:04.021407  585602 cri.go:89] found id: ""
	I1205 20:33:04.021442  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.021451  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:04.021458  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:04.021523  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:04.063385  585602 cri.go:89] found id: ""
	I1205 20:33:04.063414  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.063423  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:04.063430  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:04.063488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:04.103693  585602 cri.go:89] found id: ""
	I1205 20:33:04.103735  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.103747  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:04.103756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:04.103815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:04.143041  585602 cri.go:89] found id: ""
	I1205 20:33:04.143072  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.143100  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:04.143109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:04.143179  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:04.180668  585602 cri.go:89] found id: ""
	I1205 20:33:04.180702  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.180712  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:04.180718  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:04.180778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:04.221848  585602 cri.go:89] found id: ""
	I1205 20:33:04.221885  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.221894  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:04.221901  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:04.222018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:04.263976  585602 cri.go:89] found id: ""
	I1205 20:33:04.264014  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.264024  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:04.264030  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:04.264097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:04.298698  585602 cri.go:89] found id: ""
	I1205 20:33:04.298726  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.298737  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:04.298751  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:04.298767  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:04.347604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:04.347659  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:04.361325  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:04.361361  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:04.437679  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:04.437704  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:04.437720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:04.520043  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:04.520103  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:05.163317  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.165936  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:08.043365  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:10.544442  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.122134  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:09.620228  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.070687  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:07.085290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:07.085367  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:07.126233  585602 cri.go:89] found id: ""
	I1205 20:33:07.126265  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.126276  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:07.126285  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:07.126346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:07.163004  585602 cri.go:89] found id: ""
	I1205 20:33:07.163040  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.163053  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:07.163061  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:07.163126  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:07.201372  585602 cri.go:89] found id: ""
	I1205 20:33:07.201412  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.201425  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:07.201435  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:07.201509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:07.237762  585602 cri.go:89] found id: ""
	I1205 20:33:07.237795  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.237807  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:07.237815  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:07.237885  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:07.273940  585602 cri.go:89] found id: ""
	I1205 20:33:07.273976  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.273985  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:07.273995  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:07.274057  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:07.311028  585602 cri.go:89] found id: ""
	I1205 20:33:07.311061  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.311070  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:07.311076  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:07.311131  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:07.347386  585602 cri.go:89] found id: ""
	I1205 20:33:07.347422  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.347433  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:07.347441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:07.347503  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:07.386412  585602 cri.go:89] found id: ""
	I1205 20:33:07.386446  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.386458  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:07.386471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:07.386489  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:07.430250  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:07.430280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:07.483936  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:07.483982  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:07.498201  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:07.498236  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:07.576741  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:07.576767  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:07.576780  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.164792  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:10.178516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:10.178596  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:10.215658  585602 cri.go:89] found id: ""
	I1205 20:33:10.215692  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.215702  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:10.215711  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:10.215779  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:10.251632  585602 cri.go:89] found id: ""
	I1205 20:33:10.251671  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.251683  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:10.251691  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:10.251763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:10.295403  585602 cri.go:89] found id: ""
	I1205 20:33:10.295435  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.295453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:10.295460  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:10.295513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:10.329747  585602 cri.go:89] found id: ""
	I1205 20:33:10.329778  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.329787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:10.329793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:10.329871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:10.369975  585602 cri.go:89] found id: ""
	I1205 20:33:10.370016  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.370028  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:10.370036  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:10.370104  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:10.408146  585602 cri.go:89] found id: ""
	I1205 20:33:10.408183  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.408196  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:10.408204  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:10.408288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:10.443803  585602 cri.go:89] found id: ""
	I1205 20:33:10.443839  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.443850  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:10.443858  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:10.443932  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:10.481784  585602 cri.go:89] found id: ""
	I1205 20:33:10.481826  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.481840  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:10.481854  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:10.481872  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:10.531449  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:10.531498  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:10.549258  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:10.549288  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:10.620162  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:10.620189  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:10.620206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.704656  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:10.704706  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
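	Each gathering cycle in this log probes every expected control-plane component the same way and finds no containers. A short sketch of that probe, assuming shell access to the node and crictl on the PATH; the component names are exactly the ones queried above:
	
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== $c =="
	      sudo crictl ps -a --quiet --name="$c"   # empty output means no container exists for this component
	    done
	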
	I1205 20:33:09.663940  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.163534  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:13.043720  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:15.542736  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.118781  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:14.619996  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:13.251518  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:13.264731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:13.264815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:13.297816  585602 cri.go:89] found id: ""
	I1205 20:33:13.297846  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.297855  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:13.297861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:13.297918  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:13.330696  585602 cri.go:89] found id: ""
	I1205 20:33:13.330724  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.330732  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:13.330738  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:13.330789  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:13.366257  585602 cri.go:89] found id: ""
	I1205 20:33:13.366304  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.366315  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:13.366321  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:13.366385  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:13.403994  585602 cri.go:89] found id: ""
	I1205 20:33:13.404030  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.404042  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:13.404051  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:13.404121  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:13.450160  585602 cri.go:89] found id: ""
	I1205 20:33:13.450189  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.450198  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:13.450205  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:13.450262  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:13.502593  585602 cri.go:89] found id: ""
	I1205 20:33:13.502629  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.502640  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:13.502650  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:13.502720  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:13.548051  585602 cri.go:89] found id: ""
	I1205 20:33:13.548084  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.548095  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:13.548103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:13.548166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:13.593913  585602 cri.go:89] found id: ""
	I1205 20:33:13.593947  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.593960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:13.593975  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:13.593997  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:13.674597  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:13.674628  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:13.674647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:13.760747  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:13.760796  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:13.804351  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:13.804383  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:13.856896  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:13.856958  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.372754  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:16.387165  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:16.387242  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:16.426612  585602 cri.go:89] found id: ""
	I1205 20:33:16.426655  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.426668  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:16.426676  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:16.426734  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:16.461936  585602 cri.go:89] found id: ""
	I1205 20:33:16.461974  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.461988  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:16.461997  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:16.462060  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:16.498010  585602 cri.go:89] found id: ""
	I1205 20:33:16.498044  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.498062  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:16.498069  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:16.498133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:16.533825  585602 cri.go:89] found id: ""
	I1205 20:33:16.533854  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.533863  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:16.533869  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:16.533941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:16.570834  585602 cri.go:89] found id: ""
	I1205 20:33:16.570875  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.570887  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:16.570896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:16.570968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:16.605988  585602 cri.go:89] found id: ""
	I1205 20:33:16.606026  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.606038  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:16.606047  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:16.606140  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:16.645148  585602 cri.go:89] found id: ""
	I1205 20:33:16.645178  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.645188  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:16.645195  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:16.645261  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:16.682449  585602 cri.go:89] found id: ""
	I1205 20:33:16.682479  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.682491  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:16.682502  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:16.682519  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.696944  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:16.696980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:16.777034  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:16.777064  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:16.777078  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:14.164550  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.664527  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:17.543278  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:19.543404  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.621517  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:18.626303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.854812  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:16.854880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:16.905101  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:16.905131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.463427  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:19.477135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:19.477233  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:19.529213  585602 cri.go:89] found id: ""
	I1205 20:33:19.529248  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.529264  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:19.529274  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:19.529359  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:19.575419  585602 cri.go:89] found id: ""
	I1205 20:33:19.575453  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.575465  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:19.575474  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:19.575546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:19.616657  585602 cri.go:89] found id: ""
	I1205 20:33:19.616691  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.616704  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:19.616713  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:19.616787  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:19.653142  585602 cri.go:89] found id: ""
	I1205 20:33:19.653177  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.653189  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:19.653198  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:19.653267  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:19.690504  585602 cri.go:89] found id: ""
	I1205 20:33:19.690544  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.690555  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:19.690563  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:19.690635  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:19.730202  585602 cri.go:89] found id: ""
	I1205 20:33:19.730229  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.730237  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:19.730245  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:19.730302  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:19.767212  585602 cri.go:89] found id: ""
	I1205 20:33:19.767243  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.767255  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:19.767264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:19.767336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:19.803089  585602 cri.go:89] found id: ""
	I1205 20:33:19.803125  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.803137  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:19.803163  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:19.803180  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:19.884542  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:19.884589  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:19.925257  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:19.925303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.980457  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:19.980510  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:19.997026  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:19.997057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:20.075062  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:18.664915  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.163064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:22.042272  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:24.043822  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.120054  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:23.120944  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.618857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:22.575469  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:22.588686  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:22.588768  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:22.622824  585602 cri.go:89] found id: ""
	I1205 20:33:22.622860  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.622868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:22.622874  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:22.622931  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:22.659964  585602 cri.go:89] found id: ""
	I1205 20:33:22.660059  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.660074  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:22.660085  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:22.660153  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:22.695289  585602 cri.go:89] found id: ""
	I1205 20:33:22.695325  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.695337  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:22.695345  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:22.695417  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:22.734766  585602 cri.go:89] found id: ""
	I1205 20:33:22.734801  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.734813  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:22.734821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:22.734896  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:22.773778  585602 cri.go:89] found id: ""
	I1205 20:33:22.773806  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.773818  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:22.773826  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:22.773899  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:22.811468  585602 cri.go:89] found id: ""
	I1205 20:33:22.811503  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.811514  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:22.811521  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:22.811591  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:22.852153  585602 cri.go:89] found id: ""
	I1205 20:33:22.852210  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.852221  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:22.852227  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:22.852318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:22.888091  585602 cri.go:89] found id: ""
	I1205 20:33:22.888120  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.888129  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:22.888139  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:22.888155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:22.943210  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:22.943252  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:22.958356  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:22.958393  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:23.026732  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:23.026770  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:23.026788  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:23.106356  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:23.106395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:25.650832  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:25.665392  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:25.665475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:25.701109  585602 cri.go:89] found id: ""
	I1205 20:33:25.701146  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.701155  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:25.701162  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:25.701231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:25.738075  585602 cri.go:89] found id: ""
	I1205 20:33:25.738108  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.738117  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:25.738123  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:25.738176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:25.775031  585602 cri.go:89] found id: ""
	I1205 20:33:25.775078  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.775090  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:25.775100  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:25.775173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:25.811343  585602 cri.go:89] found id: ""
	I1205 20:33:25.811376  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.811386  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:25.811395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:25.811471  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:25.846635  585602 cri.go:89] found id: ""
	I1205 20:33:25.846674  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.846684  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:25.846692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:25.846766  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:25.881103  585602 cri.go:89] found id: ""
	I1205 20:33:25.881136  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.881145  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:25.881151  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:25.881224  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:25.917809  585602 cri.go:89] found id: ""
	I1205 20:33:25.917844  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.917855  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:25.917864  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:25.917936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:25.955219  585602 cri.go:89] found id: ""
	I1205 20:33:25.955245  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.955254  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:25.955264  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:25.955276  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:26.007016  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:26.007059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:26.021554  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:26.021601  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:26.099290  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:26.099321  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:26.099334  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:26.182955  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:26.182993  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:23.164876  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.665151  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:26.542519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:28.542856  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.542941  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:27.621687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.119140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:28.725201  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:28.739515  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:28.739602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.778187  585602 cri.go:89] found id: ""
	I1205 20:33:28.778230  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.778242  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:28.778249  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:28.778315  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:28.815788  585602 cri.go:89] found id: ""
	I1205 20:33:28.815826  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.815838  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:28.815845  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:28.815912  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:28.852222  585602 cri.go:89] found id: ""
	I1205 20:33:28.852251  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.852261  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:28.852289  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:28.852362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:28.889742  585602 cri.go:89] found id: ""
	I1205 20:33:28.889776  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.889787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:28.889794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:28.889859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:28.926872  585602 cri.go:89] found id: ""
	I1205 20:33:28.926903  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.926912  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:28.926919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:28.926972  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:28.963380  585602 cri.go:89] found id: ""
	I1205 20:33:28.963418  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.963432  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:28.963441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:28.963509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:29.000711  585602 cri.go:89] found id: ""
	I1205 20:33:29.000746  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.000764  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:29.000772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:29.000848  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:29.035934  585602 cri.go:89] found id: ""
	I1205 20:33:29.035963  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.035974  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:29.035987  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:29.036003  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:29.091336  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:29.091382  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:29.105784  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:29.105814  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:29.182038  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:29.182078  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:29.182095  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:29.261107  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:29.261153  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:31.802911  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:31.817285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:31.817369  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.164470  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.664154  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:33.043654  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.044730  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:32.120759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:34.619618  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:31.854865  585602 cri.go:89] found id: ""
	I1205 20:33:31.854900  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.854914  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:31.854922  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:31.854995  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:31.893928  585602 cri.go:89] found id: ""
	I1205 20:33:31.893964  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.893977  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:31.893984  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:31.894053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:31.929490  585602 cri.go:89] found id: ""
	I1205 20:33:31.929527  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.929540  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:31.929548  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:31.929637  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:31.964185  585602 cri.go:89] found id: ""
	I1205 20:33:31.964211  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.964219  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:31.964225  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:31.964291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:32.002708  585602 cri.go:89] found id: ""
	I1205 20:33:32.002748  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.002760  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:32.002768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:32.002847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:32.040619  585602 cri.go:89] found id: ""
	I1205 20:33:32.040712  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.040740  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:32.040758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:32.040839  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:32.079352  585602 cri.go:89] found id: ""
	I1205 20:33:32.079390  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.079404  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:32.079412  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:32.079484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:32.117560  585602 cri.go:89] found id: ""
	I1205 20:33:32.117596  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.117608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:32.117629  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:32.117653  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:32.172639  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:32.172686  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:32.187687  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:32.187727  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:32.265000  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:32.265034  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:32.265051  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:32.348128  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:32.348176  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:34.890144  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:34.903953  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:34.904032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:34.939343  585602 cri.go:89] found id: ""
	I1205 20:33:34.939374  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.939383  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:34.939389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:34.939444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:34.978225  585602 cri.go:89] found id: ""
	I1205 20:33:34.978266  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.978278  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:34.978286  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:34.978363  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:35.015918  585602 cri.go:89] found id: ""
	I1205 20:33:35.015950  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.015960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:35.015966  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:35.016032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:35.053222  585602 cri.go:89] found id: ""
	I1205 20:33:35.053249  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.053257  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:35.053264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:35.053320  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:35.088369  585602 cri.go:89] found id: ""
	I1205 20:33:35.088401  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.088412  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:35.088421  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:35.088498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:35.135290  585602 cri.go:89] found id: ""
	I1205 20:33:35.135327  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.135338  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:35.135346  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:35.135412  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:35.174959  585602 cri.go:89] found id: ""
	I1205 20:33:35.174996  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.175008  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:35.175017  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:35.175097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:35.215101  585602 cri.go:89] found id: ""
	I1205 20:33:35.215134  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.215143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:35.215152  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:35.215167  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:35.269372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:35.269414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:35.285745  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:35.285776  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:35.364774  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:35.364807  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:35.364824  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:35.445932  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:35.445980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:33.163790  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.163966  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.164819  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.047128  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.543051  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:36.620450  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.120055  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.996837  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:38.010545  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:38.010612  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:38.048292  585602 cri.go:89] found id: ""
	I1205 20:33:38.048334  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.048350  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:38.048360  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:38.048429  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:38.086877  585602 cri.go:89] found id: ""
	I1205 20:33:38.086911  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.086921  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:38.086927  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:38.087001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:38.122968  585602 cri.go:89] found id: ""
	I1205 20:33:38.122999  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.123010  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:38.123018  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:38.123082  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:38.164901  585602 cri.go:89] found id: ""
	I1205 20:33:38.164940  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.164949  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:38.164955  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:38.165006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:38.200697  585602 cri.go:89] found id: ""
	I1205 20:33:38.200725  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.200734  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:38.200740  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:38.200803  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:38.240306  585602 cri.go:89] found id: ""
	I1205 20:33:38.240338  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.240347  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:38.240354  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:38.240424  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:38.275788  585602 cri.go:89] found id: ""
	I1205 20:33:38.275823  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.275835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:38.275844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:38.275917  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:38.311431  585602 cri.go:89] found id: ""
	I1205 20:33:38.311468  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.311480  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:38.311493  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:38.311507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:38.361472  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:38.361515  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:38.375970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:38.376004  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:38.450913  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:38.450941  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:38.450961  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:38.527620  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:38.527666  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:41.072438  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:41.086085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:41.086168  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:41.123822  585602 cri.go:89] found id: ""
	I1205 20:33:41.123852  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.123861  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:41.123868  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:41.123919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:41.160343  585602 cri.go:89] found id: ""
	I1205 20:33:41.160371  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.160380  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:41.160389  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:41.160457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:41.198212  585602 cri.go:89] found id: ""
	I1205 20:33:41.198240  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.198249  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:41.198255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:41.198309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:41.233793  585602 cri.go:89] found id: ""
	I1205 20:33:41.233824  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.233832  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:41.233838  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:41.233890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:41.269397  585602 cri.go:89] found id: ""
	I1205 20:33:41.269435  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.269447  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:41.269457  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:41.269529  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:41.303079  585602 cri.go:89] found id: ""
	I1205 20:33:41.303116  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.303128  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:41.303136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:41.303196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:41.337784  585602 cri.go:89] found id: ""
	I1205 20:33:41.337817  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.337826  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:41.337832  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:41.337901  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:41.371410  585602 cri.go:89] found id: ""
	I1205 20:33:41.371438  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.371446  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:41.371456  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:41.371467  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:41.422768  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:41.422807  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:41.437427  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:41.437461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:41.510875  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:41.510898  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:41.510915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:41.590783  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:41.590826  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:39.667344  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.172287  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.043022  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.543222  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:41.120670  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:43.622132  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:45.623483  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.136390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:44.149935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:44.150006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:44.187807  585602 cri.go:89] found id: ""
	I1205 20:33:44.187846  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.187858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:44.187866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:44.187933  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:44.224937  585602 cri.go:89] found id: ""
	I1205 20:33:44.224965  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.224973  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:44.224978  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:44.225040  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:44.260230  585602 cri.go:89] found id: ""
	I1205 20:33:44.260274  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.260287  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:44.260297  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:44.260439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:44.296410  585602 cri.go:89] found id: ""
	I1205 20:33:44.296439  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.296449  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:44.296455  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:44.296507  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:44.332574  585602 cri.go:89] found id: ""
	I1205 20:33:44.332623  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.332635  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:44.332642  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:44.332709  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:44.368925  585602 cri.go:89] found id: ""
	I1205 20:33:44.368973  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.368985  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:44.368994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:44.369068  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:44.410041  585602 cri.go:89] found id: ""
	I1205 20:33:44.410075  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.410088  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:44.410095  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:44.410165  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:44.454254  585602 cri.go:89] found id: ""
	I1205 20:33:44.454295  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.454316  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:44.454330  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:44.454346  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:44.507604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:44.507669  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:44.525172  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:44.525219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:44.599417  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:44.599446  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:44.599465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:44.681624  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:44.681685  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:44.664942  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.163452  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.043225  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:49.044675  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:48.120302  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:50.120568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.230092  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:47.243979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:47.244076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:47.280346  585602 cri.go:89] found id: ""
	I1205 20:33:47.280376  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.280385  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:47.280392  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:47.280448  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:47.316454  585602 cri.go:89] found id: ""
	I1205 20:33:47.316479  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.316487  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:47.316493  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:47.316546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:47.353339  585602 cri.go:89] found id: ""
	I1205 20:33:47.353374  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.353386  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:47.353395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:47.353466  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:47.388256  585602 cri.go:89] found id: ""
	I1205 20:33:47.388319  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.388330  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:47.388339  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:47.388408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:47.424907  585602 cri.go:89] found id: ""
	I1205 20:33:47.424942  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.424953  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:47.424961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:47.425035  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:47.461386  585602 cri.go:89] found id: ""
	I1205 20:33:47.461416  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.461425  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:47.461431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:47.461485  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:47.501092  585602 cri.go:89] found id: ""
	I1205 20:33:47.501121  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.501130  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:47.501136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:47.501189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:47.559478  585602 cri.go:89] found id: ""
	I1205 20:33:47.559507  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.559520  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:47.559533  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:47.559551  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:47.609761  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:47.609800  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:47.626579  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:47.626606  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:47.713490  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:47.713520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:47.713540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:47.795346  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:47.795398  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.339441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:50.353134  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:50.353216  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:50.393950  585602 cri.go:89] found id: ""
	I1205 20:33:50.393979  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.393990  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:50.394007  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:50.394074  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:50.431166  585602 cri.go:89] found id: ""
	I1205 20:33:50.431201  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.431212  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:50.431221  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:50.431291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:50.472641  585602 cri.go:89] found id: ""
	I1205 20:33:50.472674  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.472684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:50.472692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:50.472763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:50.512111  585602 cri.go:89] found id: ""
	I1205 20:33:50.512152  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.512165  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:50.512173  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:50.512247  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:50.554500  585602 cri.go:89] found id: ""
	I1205 20:33:50.554536  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.554549  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:50.554558  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:50.554625  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:50.590724  585602 cri.go:89] found id: ""
	I1205 20:33:50.590755  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.590764  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:50.590771  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:50.590837  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:50.628640  585602 cri.go:89] found id: ""
	I1205 20:33:50.628666  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.628675  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:50.628681  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:50.628732  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:50.670009  585602 cri.go:89] found id: ""
	I1205 20:33:50.670039  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.670047  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:50.670063  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:50.670075  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:50.684236  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:50.684290  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:50.757761  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:50.757790  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:50.757813  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:50.839665  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:50.839720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.881087  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:50.881122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:49.164986  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.665655  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.543286  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.543689  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:52.621297  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:54.621764  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.433345  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:53.446747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:53.446819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:53.482928  585602 cri.go:89] found id: ""
	I1205 20:33:53.482967  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.482979  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:53.482988  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:53.483048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:53.519096  585602 cri.go:89] found id: ""
	I1205 20:33:53.519128  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.519136  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:53.519142  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:53.519196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:53.556207  585602 cri.go:89] found id: ""
	I1205 20:33:53.556233  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.556243  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:53.556249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:53.556346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:53.589708  585602 cri.go:89] found id: ""
	I1205 20:33:53.589736  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.589745  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:53.589758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:53.589813  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:53.630344  585602 cri.go:89] found id: ""
	I1205 20:33:53.630371  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.630380  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:53.630386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:53.630438  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:53.668895  585602 cri.go:89] found id: ""
	I1205 20:33:53.668921  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.668929  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:53.668935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:53.668987  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:53.706601  585602 cri.go:89] found id: ""
	I1205 20:33:53.706628  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.706638  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:53.706644  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:53.706704  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:53.744922  585602 cri.go:89] found id: ""
	I1205 20:33:53.744952  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.744960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:53.744970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:53.744989  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:53.823816  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:53.823853  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:53.823928  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:53.905075  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:53.905118  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:53.955424  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:53.955468  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:54.014871  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:54.014916  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.537142  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:56.550409  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:56.550478  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:56.587148  585602 cri.go:89] found id: ""
	I1205 20:33:56.587174  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.587184  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:56.587190  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:56.587249  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:56.625153  585602 cri.go:89] found id: ""
	I1205 20:33:56.625180  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.625188  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:56.625193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:56.625243  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:56.671545  585602 cri.go:89] found id: ""
	I1205 20:33:56.671573  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.671582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:56.671589  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:56.671652  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:56.712760  585602 cri.go:89] found id: ""
	I1205 20:33:56.712797  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.712810  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:56.712818  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:56.712890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:56.751219  585602 cri.go:89] found id: ""
	I1205 20:33:56.751254  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.751266  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:56.751274  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:56.751340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:56.787946  585602 cri.go:89] found id: ""
	I1205 20:33:56.787985  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.787998  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:56.788007  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:56.788101  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:56.823057  585602 cri.go:89] found id: ""
	I1205 20:33:56.823095  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.823108  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:56.823114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:56.823170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:54.164074  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.165063  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.043193  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:58.044158  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.542798  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.624407  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:59.119743  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.860358  585602 cri.go:89] found id: ""
	I1205 20:33:56.860396  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.860408  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:56.860421  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:56.860438  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:56.912954  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:56.912996  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.927642  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:56.927691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:57.007316  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:57.007344  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:57.007359  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:57.091471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:57.091522  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:59.642150  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:59.656240  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:59.656324  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:59.695918  585602 cri.go:89] found id: ""
	I1205 20:33:59.695954  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.695965  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:59.695973  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:59.696037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:59.744218  585602 cri.go:89] found id: ""
	I1205 20:33:59.744250  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.744260  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:59.744278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:59.744340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:59.799035  585602 cri.go:89] found id: ""
	I1205 20:33:59.799081  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.799094  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:59.799102  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:59.799172  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:59.850464  585602 cri.go:89] found id: ""
	I1205 20:33:59.850505  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.850517  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:59.850526  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:59.850590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:59.886441  585602 cri.go:89] found id: ""
	I1205 20:33:59.886477  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.886489  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:59.886497  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:59.886564  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:59.926689  585602 cri.go:89] found id: ""
	I1205 20:33:59.926728  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.926741  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:59.926751  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:59.926821  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:59.962615  585602 cri.go:89] found id: ""
	I1205 20:33:59.962644  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.962653  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:59.962659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:59.962716  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:00.001852  585602 cri.go:89] found id: ""
	I1205 20:34:00.001878  585602 logs.go:282] 0 containers: []
	W1205 20:34:00.001886  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:00.001897  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:00.001913  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:00.055465  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:00.055508  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:00.071904  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:00.071941  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:00.151225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:00.151248  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:00.151262  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:00.233869  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:00.233914  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
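	The block above is one complete probe cycle from the run with PID 585602: minikube shells into the node, asks crictl for each expected control-plane container (every query returns an empty id), then falls back to gathering kubelet, dmesg, node-describe, CRI-O and container-status output. The same cycle repeats below roughly every three seconds. A minimal shell sketch of that cycle, built only from the commands already shown in the log (the loop itself is an editorial illustration, not minikube's own code):

	    # Each crictl query below mirrors the 'found id: ""' lines above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done
	    # Fallback log gathering, exactly as logged by ssh_runner above.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	The describe-nodes step keeps failing with "connection refused" on localhost:8443 because no kube-apiserver container is running yet.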
	I1205 20:33:58.664773  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.664948  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:02.543019  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:04.543810  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:01.120136  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:03.120824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.620283  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
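	The interleaved pod_ready lines come from three other concurrent test runs (PIDs 585929, 585113 and 585025), each polling whether its metrics-server pod reports a Ready condition; all three keep seeing "False" throughout this window. A hedged one-liner that performs an equivalent readiness check with plain kubectl (the jsonpath expression is an illustration, not taken from minikube):

	    # Prints the pod's Ready condition status ("True" once it is ready).
	    kubectl -n kube-system get pod metrics-server-6867b74b74-vjwq2 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'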
	I1205 20:34:02.776751  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:02.790868  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:02.790945  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:02.834686  585602 cri.go:89] found id: ""
	I1205 20:34:02.834719  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.834731  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:02.834740  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:02.834823  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:02.871280  585602 cri.go:89] found id: ""
	I1205 20:34:02.871313  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.871333  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:02.871342  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:02.871413  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:02.907300  585602 cri.go:89] found id: ""
	I1205 20:34:02.907336  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.907346  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:02.907352  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:02.907406  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:02.945453  585602 cri.go:89] found id: ""
	I1205 20:34:02.945487  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.945499  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:02.945511  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:02.945587  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:02.980528  585602 cri.go:89] found id: ""
	I1205 20:34:02.980561  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.980573  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:02.980580  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:02.980653  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:03.016919  585602 cri.go:89] found id: ""
	I1205 20:34:03.016946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.016955  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:03.016961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:03.017012  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:03.053541  585602 cri.go:89] found id: ""
	I1205 20:34:03.053575  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.053588  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:03.053596  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:03.053655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:03.089907  585602 cri.go:89] found id: ""
	I1205 20:34:03.089946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.089959  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:03.089974  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:03.089991  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:03.144663  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:03.144700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:03.160101  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:03.160140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:03.231559  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:03.231583  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:03.231600  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:03.313226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:03.313271  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:05.855538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:05.869019  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:05.869120  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:05.906879  585602 cri.go:89] found id: ""
	I1205 20:34:05.906910  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.906921  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:05.906928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:05.906994  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:05.946846  585602 cri.go:89] found id: ""
	I1205 20:34:05.946881  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.946893  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:05.946900  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:05.946968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:05.984067  585602 cri.go:89] found id: ""
	I1205 20:34:05.984104  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.984118  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:05.984127  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:05.984193  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:06.024984  585602 cri.go:89] found id: ""
	I1205 20:34:06.025014  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.025023  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:06.025029  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:06.025091  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:06.064766  585602 cri.go:89] found id: ""
	I1205 20:34:06.064794  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.064806  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:06.064821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:06.064877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:06.105652  585602 cri.go:89] found id: ""
	I1205 20:34:06.105683  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.105691  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:06.105698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:06.105748  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:06.143732  585602 cri.go:89] found id: ""
	I1205 20:34:06.143762  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.143773  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:06.143781  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:06.143857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:06.183397  585602 cri.go:89] found id: ""
	I1205 20:34:06.183429  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.183439  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:06.183449  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:06.183462  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:06.236403  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:06.236449  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:06.250728  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:06.250759  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:06.320983  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:06.321009  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:06.321025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:06.408037  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:06.408084  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:03.164354  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.665345  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:07.044218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:09.543580  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.119532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.119918  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.955959  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:08.968956  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:08.969037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:09.002804  585602 cri.go:89] found id: ""
	I1205 20:34:09.002846  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.002859  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:09.002866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:09.002935  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:09.039098  585602 cri.go:89] found id: ""
	I1205 20:34:09.039191  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.039210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:09.039220  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:09.039291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:09.074727  585602 cri.go:89] found id: ""
	I1205 20:34:09.074764  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.074776  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:09.074792  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:09.074861  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:09.112650  585602 cri.go:89] found id: ""
	I1205 20:34:09.112682  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.112692  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:09.112698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:09.112754  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:09.149301  585602 cri.go:89] found id: ""
	I1205 20:34:09.149346  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.149359  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:09.149368  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:09.149432  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:09.190288  585602 cri.go:89] found id: ""
	I1205 20:34:09.190317  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.190329  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:09.190338  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:09.190404  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:09.225311  585602 cri.go:89] found id: ""
	I1205 20:34:09.225348  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.225361  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:09.225369  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:09.225435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:09.261023  585602 cri.go:89] found id: ""
	I1205 20:34:09.261052  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.261063  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:09.261075  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:09.261092  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:09.313733  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:09.313785  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:09.329567  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:09.329619  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:09.403397  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:09.403430  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:09.403447  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:09.486586  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:09.486630  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:08.163730  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.663603  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.665663  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:11.544538  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.042854  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.120629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.621977  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.028110  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:12.041802  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:12.041866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:12.080349  585602 cri.go:89] found id: ""
	I1205 20:34:12.080388  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.080402  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:12.080410  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:12.080475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:12.121455  585602 cri.go:89] found id: ""
	I1205 20:34:12.121486  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.121499  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:12.121507  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:12.121567  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:12.157743  585602 cri.go:89] found id: ""
	I1205 20:34:12.157768  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.157785  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:12.157794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:12.157855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:12.196901  585602 cri.go:89] found id: ""
	I1205 20:34:12.196933  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.196946  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:12.196954  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:12.197024  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:12.234471  585602 cri.go:89] found id: ""
	I1205 20:34:12.234500  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.234508  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:12.234516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:12.234585  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:12.269238  585602 cri.go:89] found id: ""
	I1205 20:34:12.269263  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.269271  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:12.269278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:12.269340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:12.307965  585602 cri.go:89] found id: ""
	I1205 20:34:12.308006  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.308016  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:12.308022  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:12.308081  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:12.343463  585602 cri.go:89] found id: ""
	I1205 20:34:12.343497  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.343510  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:12.343536  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:12.343574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:12.393393  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:12.393437  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:12.407991  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:12.408025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:12.477868  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:12.477910  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:12.477924  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:12.557274  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:12.557315  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.102587  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:15.115734  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:15.115808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:15.153057  585602 cri.go:89] found id: ""
	I1205 20:34:15.153091  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.153105  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:15.153113  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:15.153182  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:15.192762  585602 cri.go:89] found id: ""
	I1205 20:34:15.192815  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.192825  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:15.192831  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:15.192887  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:15.231330  585602 cri.go:89] found id: ""
	I1205 20:34:15.231364  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.231374  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:15.231380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:15.231435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:15.265229  585602 cri.go:89] found id: ""
	I1205 20:34:15.265262  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.265271  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:15.265278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:15.265350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:15.299596  585602 cri.go:89] found id: ""
	I1205 20:34:15.299624  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.299634  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:15.299640  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:15.299699  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:15.336155  585602 cri.go:89] found id: ""
	I1205 20:34:15.336187  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.336195  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:15.336202  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:15.336256  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:15.371867  585602 cri.go:89] found id: ""
	I1205 20:34:15.371899  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.371909  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:15.371920  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:15.371976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:15.408536  585602 cri.go:89] found id: ""
	I1205 20:34:15.408566  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.408580  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:15.408592  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:15.408609  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:15.422499  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:15.422538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:15.495096  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:15.495131  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:15.495145  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:15.571411  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:15.571461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.612284  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:15.612319  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:15.165343  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.165619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:16.043962  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.542495  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.119936  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:19.622046  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.168869  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:18.184247  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:18.184370  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:18.226078  585602 cri.go:89] found id: ""
	I1205 20:34:18.226112  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.226124  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:18.226133  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:18.226202  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:18.266221  585602 cri.go:89] found id: ""
	I1205 20:34:18.266258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.266270  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:18.266278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:18.266349  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:18.305876  585602 cri.go:89] found id: ""
	I1205 20:34:18.305903  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.305912  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:18.305921  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:18.305971  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:18.342044  585602 cri.go:89] found id: ""
	I1205 20:34:18.342077  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.342089  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:18.342098  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:18.342160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:18.380240  585602 cri.go:89] found id: ""
	I1205 20:34:18.380290  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.380301  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:18.380310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:18.380372  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:18.416228  585602 cri.go:89] found id: ""
	I1205 20:34:18.416258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.416301  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:18.416311  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:18.416380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:18.453368  585602 cri.go:89] found id: ""
	I1205 20:34:18.453407  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.453420  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:18.453429  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:18.453513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:18.491689  585602 cri.go:89] found id: ""
	I1205 20:34:18.491727  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.491739  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:18.491754  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:18.491779  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:18.546614  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:18.546652  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:18.560516  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:18.560547  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:18.637544  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:18.637568  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:18.637582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:18.720410  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:18.720453  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:21.261494  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:21.276378  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:21.276473  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:21.317571  585602 cri.go:89] found id: ""
	I1205 20:34:21.317602  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.317610  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:21.317617  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:21.317670  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:21.355174  585602 cri.go:89] found id: ""
	I1205 20:34:21.355202  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.355210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:21.355217  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:21.355277  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:21.393259  585602 cri.go:89] found id: ""
	I1205 20:34:21.393297  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.393310  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:21.393317  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:21.393408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:21.432286  585602 cri.go:89] found id: ""
	I1205 20:34:21.432329  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.432341  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:21.432348  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:21.432415  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:21.469844  585602 cri.go:89] found id: ""
	I1205 20:34:21.469877  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.469888  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:21.469896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:21.469964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:21.508467  585602 cri.go:89] found id: ""
	I1205 20:34:21.508507  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.508519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:21.508528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:21.508592  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:21.553053  585602 cri.go:89] found id: ""
	I1205 20:34:21.553185  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.553208  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:21.553226  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:21.553317  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:21.590595  585602 cri.go:89] found id: ""
	I1205 20:34:21.590629  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.590640  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:21.590654  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:21.590672  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:21.649493  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:21.649546  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:21.666114  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:21.666147  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:21.742801  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:21.742828  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:21.742858  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:21.822949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:21.823010  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:19.165951  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.664450  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.043233  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:23.043477  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:25.543490  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:22.119177  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.119685  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.366575  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:24.380894  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:24.380992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:24.416907  585602 cri.go:89] found id: ""
	I1205 20:34:24.416943  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.416956  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:24.416965  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:24.417034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:24.453303  585602 cri.go:89] found id: ""
	I1205 20:34:24.453337  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.453349  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:24.453358  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:24.453445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:24.496795  585602 cri.go:89] found id: ""
	I1205 20:34:24.496825  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.496833  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:24.496839  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:24.496907  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:24.539105  585602 cri.go:89] found id: ""
	I1205 20:34:24.539142  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.539154  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:24.539162  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:24.539230  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:24.576778  585602 cri.go:89] found id: ""
	I1205 20:34:24.576808  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.576816  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:24.576822  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:24.576879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:24.617240  585602 cri.go:89] found id: ""
	I1205 20:34:24.617271  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.617280  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:24.617293  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:24.617374  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:24.659274  585602 cri.go:89] found id: ""
	I1205 20:34:24.659316  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.659330  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:24.659342  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:24.659408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:24.701047  585602 cri.go:89] found id: ""
	I1205 20:34:24.701092  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.701105  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:24.701121  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:24.701139  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:24.741070  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:24.741115  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:24.793364  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:24.793407  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:24.807803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:24.807839  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:24.883194  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:24.883225  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:24.883243  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:24.163198  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.165402  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:27.544607  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.044244  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.619847  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:28.621467  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.621704  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:27.467460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:27.483055  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:27.483129  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:27.523718  585602 cri.go:89] found id: ""
	I1205 20:34:27.523752  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.523763  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:27.523772  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:27.523841  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:27.562872  585602 cri.go:89] found id: ""
	I1205 20:34:27.562899  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.562908  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:27.562915  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:27.562976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:27.601804  585602 cri.go:89] found id: ""
	I1205 20:34:27.601835  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.601845  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:27.601852  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:27.601916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:27.640553  585602 cri.go:89] found id: ""
	I1205 20:34:27.640589  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.640599  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:27.640605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:27.640672  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:27.680983  585602 cri.go:89] found id: ""
	I1205 20:34:27.681015  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.681027  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:27.681035  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:27.681105  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:27.720766  585602 cri.go:89] found id: ""
	I1205 20:34:27.720811  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.720821  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:27.720828  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:27.720886  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:27.761422  585602 cri.go:89] found id: ""
	I1205 20:34:27.761453  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.761466  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:27.761480  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:27.761550  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:27.799658  585602 cri.go:89] found id: ""
	I1205 20:34:27.799692  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.799705  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:27.799720  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:27.799736  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:27.851801  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:27.851845  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:27.865953  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:27.865984  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:27.941787  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:27.941824  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:27.941840  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:28.023556  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:28.023616  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:30.573267  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:30.586591  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:30.586679  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:30.629923  585602 cri.go:89] found id: ""
	I1205 20:34:30.629960  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.629974  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:30.629982  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:30.630048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:30.667045  585602 cri.go:89] found id: ""
	I1205 20:34:30.667078  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.667090  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:30.667098  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:30.667167  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:30.704479  585602 cri.go:89] found id: ""
	I1205 20:34:30.704510  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.704522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:30.704530  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:30.704620  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:30.746035  585602 cri.go:89] found id: ""
	I1205 20:34:30.746065  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.746077  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:30.746085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:30.746161  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:30.784375  585602 cri.go:89] found id: ""
	I1205 20:34:30.784415  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.784425  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:30.784431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:30.784487  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:30.821779  585602 cri.go:89] found id: ""
	I1205 20:34:30.821811  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.821822  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:30.821831  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:30.821905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:30.856927  585602 cri.go:89] found id: ""
	I1205 20:34:30.856963  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.856976  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:30.856984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:30.857088  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:30.895852  585602 cri.go:89] found id: ""
	I1205 20:34:30.895882  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.895894  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:30.895914  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:30.895930  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:30.947600  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:30.947642  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:30.962717  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:30.962753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:31.049225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:31.049262  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:31.049280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:31.126806  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:31.126850  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
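	The cycle above is minikube listing CRI containers for each control-plane component ("sudo crictl ps -a --quiet --name=...") and finding none, which is why every lookup ends in found id: "" / 0 containers. A minimal Go sketch of that check, assuming crictl is invoked directly on the host with sudo; minikube itself runs the same command through its ssh_runner inside the guest:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns the
	// container IDs it prints, one per line. An empty result corresponds to the
	// `found id: ""` / `0 containers` lines in the log.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", name, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%q containers: %v\n", name, ids)
		}
	}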
	I1205 20:34:28.665006  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:31.164172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:32.548634  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.042159  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.120370  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.621247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.670844  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:33.685063  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:33.685160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:33.718277  585602 cri.go:89] found id: ""
	I1205 20:34:33.718312  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.718321  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:33.718327  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:33.718378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.755409  585602 cri.go:89] found id: ""
	I1205 20:34:33.755445  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.755456  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:33.755465  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:33.755542  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:33.809447  585602 cri.go:89] found id: ""
	I1205 20:34:33.809506  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.809519  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:33.809527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:33.809599  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:33.848327  585602 cri.go:89] found id: ""
	I1205 20:34:33.848362  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.848376  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:33.848384  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:33.848444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:33.887045  585602 cri.go:89] found id: ""
	I1205 20:34:33.887082  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.887094  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:33.887103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:33.887178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:33.924385  585602 cri.go:89] found id: ""
	I1205 20:34:33.924418  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.924427  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:33.924434  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:33.924499  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:33.960711  585602 cri.go:89] found id: ""
	I1205 20:34:33.960738  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.960747  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:33.960757  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:33.960808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:33.998150  585602 cri.go:89] found id: ""
	I1205 20:34:33.998184  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.998193  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:33.998203  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:33.998215  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:34.041977  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:34.042006  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:34.095895  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:34.095940  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:34.109802  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:34.109836  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:34.185716  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:34.185740  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:34.185753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:36.767768  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:36.782114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:36.782201  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:36.820606  585602 cri.go:89] found id: ""
	I1205 20:34:36.820647  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.820659  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:36.820668  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:36.820736  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.164572  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.664069  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:37.043102  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:39.544667  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:38.120555  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.619948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:36.858999  585602 cri.go:89] found id: ""
	I1205 20:34:36.859033  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.859044  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:36.859051  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:36.859117  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:36.896222  585602 cri.go:89] found id: ""
	I1205 20:34:36.896257  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.896282  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:36.896290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:36.896352  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:36.935565  585602 cri.go:89] found id: ""
	I1205 20:34:36.935602  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.935612  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:36.935618  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:36.935671  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:36.974031  585602 cri.go:89] found id: ""
	I1205 20:34:36.974066  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.974079  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:36.974096  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:36.974166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:37.018243  585602 cri.go:89] found id: ""
	I1205 20:34:37.018278  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.018290  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:37.018300  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:37.018371  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:37.057715  585602 cri.go:89] found id: ""
	I1205 20:34:37.057742  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.057750  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:37.057756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:37.057806  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:37.099006  585602 cri.go:89] found id: ""
	I1205 20:34:37.099037  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.099045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:37.099055  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:37.099070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:37.186218  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:37.186264  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:37.232921  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:37.232955  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:37.285539  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:37.285581  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:37.301115  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:37.301155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:37.373249  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
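	The describe-nodes fallback keeps failing with "connection refused" on localhost:8443, which is consistent with the pgrep probe on the next line finding no kube-apiserver process. A minimal sketch of that probe, assuming pgrep is run locally rather than over minikube's ssh_runner, and with a retry cadence only loosely matching the log's:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning reports whether a kube-apiserver process matching the
	// minikube pattern exists. pgrep exits non-zero when nothing matches, which
	// is why the log falls back to re-listing containers and gathering logs.
	func apiserverRunning() bool {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}

	func main() {
		for i := 0; i < 5; i++ {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			fmt.Println("kube-apiserver not running yet; retrying")
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}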
	I1205 20:34:39.873692  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:39.887772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:39.887847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:39.925558  585602 cri.go:89] found id: ""
	I1205 20:34:39.925595  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.925607  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:39.925615  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:39.925684  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:39.964967  585602 cri.go:89] found id: ""
	I1205 20:34:39.964994  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.965004  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:39.965011  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:39.965073  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:40.010875  585602 cri.go:89] found id: ""
	I1205 20:34:40.010911  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.010923  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:40.010930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:40.011003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:40.050940  585602 cri.go:89] found id: ""
	I1205 20:34:40.050970  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.050981  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:40.050990  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:40.051052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:40.086157  585602 cri.go:89] found id: ""
	I1205 20:34:40.086197  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.086210  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:40.086219  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:40.086283  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:40.123280  585602 cri.go:89] found id: ""
	I1205 20:34:40.123321  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.123333  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:40.123344  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:40.123414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:40.164755  585602 cri.go:89] found id: ""
	I1205 20:34:40.164784  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.164793  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:40.164800  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:40.164871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:40.211566  585602 cri.go:89] found id: ""
	I1205 20:34:40.211595  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.211608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:40.211621  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:40.211638  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:40.275269  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:40.275326  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:40.303724  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:40.303754  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:40.377315  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:40.377345  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:40.377360  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:40.457744  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:40.457794  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:38.163598  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.164173  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.043947  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:44.542445  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.621824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:45.120127  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:43.000390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:43.015220  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:43.015308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:43.051919  585602 cri.go:89] found id: ""
	I1205 20:34:43.051946  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.051955  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:43.051961  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:43.052034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:43.088188  585602 cri.go:89] found id: ""
	I1205 20:34:43.088230  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.088241  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:43.088249  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:43.088350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:43.125881  585602 cri.go:89] found id: ""
	I1205 20:34:43.125910  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.125922  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:43.125930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:43.125988  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:43.166630  585602 cri.go:89] found id: ""
	I1205 20:34:43.166657  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.166674  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:43.166682  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:43.166744  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:43.206761  585602 cri.go:89] found id: ""
	I1205 20:34:43.206791  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.206803  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:43.206810  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:43.206873  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:43.242989  585602 cri.go:89] found id: ""
	I1205 20:34:43.243017  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.243026  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:43.243033  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:43.243094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:43.281179  585602 cri.go:89] found id: ""
	I1205 20:34:43.281208  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.281217  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:43.281223  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:43.281272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:43.317283  585602 cri.go:89] found id: ""
	I1205 20:34:43.317314  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.317326  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:43.317347  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:43.317362  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:43.369262  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:43.369303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:43.386137  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:43.386182  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:43.458532  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:43.458553  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:43.458566  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:43.538254  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:43.538296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
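	While the apiserver stays down, every iteration gathers the same fallback set: kubelet and CRI-O journals, dmesg, describe nodes, and container status. The sketch below replays those commands as copied from the log; running them directly on the local host (rather than through minikube's ssh_runner) is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Command strings are taken verbatim from the log above.
		steps := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range steps {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				// "describe nodes" exits with status 1 while localhost:8443
				// refuses connections, exactly as logged above.
				fmt.Printf("gathering %s failed: %v\n", s.name, err)
				continue
			}
			fmt.Printf("=== %s ===\n%s\n", s.name, out)
		}
	}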
	I1205 20:34:46.083593  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:46.101024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:46.101133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:46.169786  585602 cri.go:89] found id: ""
	I1205 20:34:46.169817  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.169829  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:46.169838  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:46.169905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:46.218647  585602 cri.go:89] found id: ""
	I1205 20:34:46.218689  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.218704  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:46.218713  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:46.218790  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:46.262718  585602 cri.go:89] found id: ""
	I1205 20:34:46.262749  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.262758  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:46.262764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:46.262846  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:46.301606  585602 cri.go:89] found id: ""
	I1205 20:34:46.301638  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.301649  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:46.301656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:46.301714  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:46.337313  585602 cri.go:89] found id: ""
	I1205 20:34:46.337347  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.337356  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:46.337362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:46.337422  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:46.380171  585602 cri.go:89] found id: ""
	I1205 20:34:46.380201  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.380209  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:46.380215  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:46.380288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:46.423054  585602 cri.go:89] found id: ""
	I1205 20:34:46.423089  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.423101  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:46.423109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:46.423178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:46.467615  585602 cri.go:89] found id: ""
	I1205 20:34:46.467647  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.467659  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:46.467673  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:46.467687  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:46.522529  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:46.522579  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:46.537146  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:46.537199  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:46.609585  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:46.609618  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:46.609637  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:46.696093  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:46.696152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:45.164249  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.664159  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:46.547883  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.043793  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.623375  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:50.122680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.238735  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:49.256406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:49.256484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:49.294416  585602 cri.go:89] found id: ""
	I1205 20:34:49.294449  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.294458  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:49.294467  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:49.294528  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:49.334235  585602 cri.go:89] found id: ""
	I1205 20:34:49.334268  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.334282  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:49.334290  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:49.334362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:49.372560  585602 cri.go:89] found id: ""
	I1205 20:34:49.372637  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.372662  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:49.372674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:49.372756  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:49.413779  585602 cri.go:89] found id: ""
	I1205 20:34:49.413813  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.413822  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:49.413829  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:49.413900  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:49.449513  585602 cri.go:89] found id: ""
	I1205 20:34:49.449543  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.449553  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:49.449560  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:49.449630  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:49.488923  585602 cri.go:89] found id: ""
	I1205 20:34:49.488961  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.488973  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:49.488982  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:49.489050  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:49.524922  585602 cri.go:89] found id: ""
	I1205 20:34:49.524959  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.524971  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:49.524980  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:49.525048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:49.565700  585602 cri.go:89] found id: ""
	I1205 20:34:49.565735  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.565745  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:49.565756  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:49.565769  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:49.624297  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:49.624339  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:49.641424  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:49.641465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:49.721474  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:49.721504  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:49.721517  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:49.810777  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:49.810822  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:49.664998  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.163337  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:51.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:54.045218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.621649  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:55.120035  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
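	The interleaved pod_ready.go lines come from other test clusters polling their metrics-server pods and repeatedly seeing Ready=False. A minimal client-go sketch of that readiness check; the wiring here is an assumption (minikube's own helper differs), but the test itself is the standard PodReady condition, and the pod name is just one of the generated names visible in the log:

	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	// podReady reports whether the pod's Ready condition is True, matching the
	// `has status "Ready":"False"` messages while metrics-server waits.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-rq8xm", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
	}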
	I1205 20:34:52.354661  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:52.368481  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:52.368555  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:52.407081  585602 cri.go:89] found id: ""
	I1205 20:34:52.407110  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.407118  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:52.407125  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:52.407189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:52.444462  585602 cri.go:89] found id: ""
	I1205 20:34:52.444489  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.444498  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:52.444505  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:52.444562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:52.483546  585602 cri.go:89] found id: ""
	I1205 20:34:52.483573  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.483582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:52.483595  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:52.483648  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:52.526529  585602 cri.go:89] found id: ""
	I1205 20:34:52.526567  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.526579  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:52.526587  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:52.526655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:52.564875  585602 cri.go:89] found id: ""
	I1205 20:34:52.564904  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.564913  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:52.564919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:52.564984  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:52.599367  585602 cri.go:89] found id: ""
	I1205 20:34:52.599397  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.599410  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:52.599419  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:52.599475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:52.638192  585602 cri.go:89] found id: ""
	I1205 20:34:52.638233  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.638247  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:52.638255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:52.638336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:52.675227  585602 cri.go:89] found id: ""
	I1205 20:34:52.675264  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.675275  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:52.675287  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:52.675311  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:52.716538  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:52.716582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:52.772121  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:52.772162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:52.787598  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:52.787632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:52.865380  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:52.865408  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:52.865422  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.449288  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:55.462386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:55.462474  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:55.498350  585602 cri.go:89] found id: ""
	I1205 20:34:55.498382  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.498391  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:55.498397  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:55.498457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:55.540878  585602 cri.go:89] found id: ""
	I1205 20:34:55.540915  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.540929  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:55.540939  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:55.541022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:55.577248  585602 cri.go:89] found id: ""
	I1205 20:34:55.577277  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.577288  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:55.577294  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:55.577375  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:55.615258  585602 cri.go:89] found id: ""
	I1205 20:34:55.615287  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.615308  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:55.615316  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:55.615384  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:55.652102  585602 cri.go:89] found id: ""
	I1205 20:34:55.652136  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.652147  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:55.652157  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:55.652228  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:55.689353  585602 cri.go:89] found id: ""
	I1205 20:34:55.689387  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.689399  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:55.689408  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:55.689486  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:55.727603  585602 cri.go:89] found id: ""
	I1205 20:34:55.727634  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.727648  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:55.727657  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:55.727729  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:55.765103  585602 cri.go:89] found id: ""
	I1205 20:34:55.765134  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.765143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:55.765156  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:55.765169  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:55.823878  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:55.823923  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:55.838966  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:55.839001  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:55.909385  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:55.909412  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:55.909424  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.992036  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:55.992080  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:54.165488  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.166030  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.542663  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.543260  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:57.120140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:59.621190  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.537231  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:58.552307  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:58.552392  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:58.589150  585602 cri.go:89] found id: ""
	I1205 20:34:58.589184  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.589200  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:58.589206  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:58.589272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:58.630344  585602 cri.go:89] found id: ""
	I1205 20:34:58.630370  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.630378  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:58.630385  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:58.630452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:58.669953  585602 cri.go:89] found id: ""
	I1205 20:34:58.669981  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.669991  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:58.669999  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:58.670055  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:58.708532  585602 cri.go:89] found id: ""
	I1205 20:34:58.708562  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.708570  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:58.708577  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:58.708631  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:58.745944  585602 cri.go:89] found id: ""
	I1205 20:34:58.745975  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.745986  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:58.745994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:58.746051  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.787177  585602 cri.go:89] found id: ""
	I1205 20:34:58.787206  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.787214  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:58.787221  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:58.787272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:58.822084  585602 cri.go:89] found id: ""
	I1205 20:34:58.822123  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.822134  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:58.822142  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:58.822210  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:58.858608  585602 cri.go:89] found id: ""
	I1205 20:34:58.858645  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.858657  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:58.858670  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:58.858691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:58.873289  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:58.873322  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:58.947855  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:58.947884  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:58.947900  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:59.028348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:59.028397  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:59.069172  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:59.069206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.623309  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:01.637362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:01.637449  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:01.678867  585602 cri.go:89] found id: ""
	I1205 20:35:01.678907  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.678919  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:01.678928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:01.679001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:01.715333  585602 cri.go:89] found id: ""
	I1205 20:35:01.715364  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.715372  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:01.715379  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:01.715439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:01.754247  585602 cri.go:89] found id: ""
	I1205 20:35:01.754277  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.754286  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:01.754292  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:01.754348  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:01.791922  585602 cri.go:89] found id: ""
	I1205 20:35:01.791957  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.791968  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:01.791977  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:01.792045  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:01.827261  585602 cri.go:89] found id: ""
	I1205 20:35:01.827294  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.827307  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:01.827315  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:01.827389  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.665248  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.163431  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.043056  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:03.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:02.122540  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:04.620544  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.864205  585602 cri.go:89] found id: ""
	I1205 20:35:01.864234  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.864243  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:01.864249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:01.864332  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:01.902740  585602 cri.go:89] found id: ""
	I1205 20:35:01.902773  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.902783  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:01.902789  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:01.902857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:01.941627  585602 cri.go:89] found id: ""
	I1205 20:35:01.941657  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.941666  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:01.941677  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:01.941690  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.995743  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:01.995791  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:02.010327  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:02.010368  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:02.086879  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:02.086907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:02.086921  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:02.166500  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:02.166538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
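The cycle above is minikube's log collector walking through every expected control-plane container and finding none; the "connection to the server localhost:8443 was refused" errors from `kubectl describe nodes` confirm the apiserver never came back after the restart. A minimal manual equivalent of that container check is sketched below (the container names and the crictl invocation are taken from the log; the loop itself is an assumption, not minikube's code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      # same query the collector runs: list containers in any state matching the name
      sudo crictl ps -a --quiet --name="$name" | grep -q . || echo "no $name container found"
    done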
	I1205 20:35:04.716638  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:04.730922  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:04.730992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:04.768492  585602 cri.go:89] found id: ""
	I1205 20:35:04.768524  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.768534  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:04.768540  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:04.768606  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:04.803740  585602 cri.go:89] found id: ""
	I1205 20:35:04.803776  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.803789  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:04.803797  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:04.803866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:04.840907  585602 cri.go:89] found id: ""
	I1205 20:35:04.840947  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.840960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:04.840968  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:04.841036  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:04.875901  585602 cri.go:89] found id: ""
	I1205 20:35:04.875933  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.875943  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:04.875949  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:04.876003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:04.913581  585602 cri.go:89] found id: ""
	I1205 20:35:04.913617  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.913627  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:04.913634  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:04.913689  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:04.952460  585602 cri.go:89] found id: ""
	I1205 20:35:04.952504  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.952519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:04.952528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:04.952617  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:04.989939  585602 cri.go:89] found id: ""
	I1205 20:35:04.989968  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.989979  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:04.989985  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:04.990041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:05.025017  585602 cri.go:89] found id: ""
	I1205 20:35:05.025052  585602 logs.go:282] 0 containers: []
	W1205 20:35:05.025066  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:05.025078  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:05.025094  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:05.068179  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:05.068223  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:05.127311  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:05.127369  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:05.141092  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:05.141129  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:05.217648  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:05.217678  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:05.217691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:03.163987  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:05.164131  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:07.165804  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:06.043765  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:08.036400  585113 pod_ready.go:82] duration metric: took 4m0.000157493s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	E1205 20:35:08.036457  585113 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:35:08.036489  585113 pod_ready.go:39] duration metric: took 4m11.05050249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:08.036554  585113 kubeadm.go:597] duration metric: took 4m18.178903617s to restartPrimaryControlPlane
	W1205 20:35:08.036733  585113 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:08.036784  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:06.621887  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:09.119692  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:07.793457  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:07.808710  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:07.808778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:07.846331  585602 cri.go:89] found id: ""
	I1205 20:35:07.846366  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.846380  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:07.846389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:07.846462  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:07.881185  585602 cri.go:89] found id: ""
	I1205 20:35:07.881222  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.881236  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:07.881243  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:07.881307  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:07.918463  585602 cri.go:89] found id: ""
	I1205 20:35:07.918501  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.918514  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:07.918522  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:07.918589  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:07.956329  585602 cri.go:89] found id: ""
	I1205 20:35:07.956364  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.956375  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:07.956385  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:07.956456  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:07.992173  585602 cri.go:89] found id: ""
	I1205 20:35:07.992212  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.992222  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:07.992229  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:07.992318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:08.030183  585602 cri.go:89] found id: ""
	I1205 20:35:08.030214  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.030226  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:08.030235  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:08.030309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:08.072320  585602 cri.go:89] found id: ""
	I1205 20:35:08.072362  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.072374  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:08.072382  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:08.072452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:08.124220  585602 cri.go:89] found id: ""
	I1205 20:35:08.124253  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.124277  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:08.124292  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:08.124310  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:08.171023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:08.171057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:08.237645  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:08.237699  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:08.252708  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:08.252744  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:08.343107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:08.343140  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:08.343158  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:10.919646  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:10.934494  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:10.934562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:10.971816  585602 cri.go:89] found id: ""
	I1205 20:35:10.971855  585602 logs.go:282] 0 containers: []
	W1205 20:35:10.971868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:10.971878  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:10.971950  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:11.010031  585602 cri.go:89] found id: ""
	I1205 20:35:11.010071  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.010084  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:11.010095  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:11.010170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:11.046520  585602 cri.go:89] found id: ""
	I1205 20:35:11.046552  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.046561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:11.046568  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:11.046632  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:11.081385  585602 cri.go:89] found id: ""
	I1205 20:35:11.081426  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.081440  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:11.081448  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:11.081522  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:11.122529  585602 cri.go:89] found id: ""
	I1205 20:35:11.122559  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.122568  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:11.122576  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:11.122656  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:11.161684  585602 cri.go:89] found id: ""
	I1205 20:35:11.161767  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.161788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:11.161797  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:11.161862  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:11.199796  585602 cri.go:89] found id: ""
	I1205 20:35:11.199824  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.199833  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:11.199842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:11.199916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:11.235580  585602 cri.go:89] found id: ""
	I1205 20:35:11.235617  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.235625  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:11.235635  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:11.235647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:11.291005  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:11.291055  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:11.305902  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:11.305947  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:11.375862  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:11.375894  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:11.375915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:11.456701  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:11.456746  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:09.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.664200  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.119954  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:13.120903  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:15.622247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:14.006509  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:14.020437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:14.020531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:14.056878  585602 cri.go:89] found id: ""
	I1205 20:35:14.056905  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.056915  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:14.056923  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:14.056993  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:14.091747  585602 cri.go:89] found id: ""
	I1205 20:35:14.091782  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.091792  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:14.091800  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:14.091860  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:14.131409  585602 cri.go:89] found id: ""
	I1205 20:35:14.131440  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.131453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:14.131461  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:14.131532  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:14.170726  585602 cri.go:89] found id: ""
	I1205 20:35:14.170754  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.170765  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:14.170773  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:14.170851  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:14.208619  585602 cri.go:89] found id: ""
	I1205 20:35:14.208654  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.208666  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:14.208674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:14.208747  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:14.247734  585602 cri.go:89] found id: ""
	I1205 20:35:14.247771  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.247784  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:14.247793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:14.247855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:14.296090  585602 cri.go:89] found id: ""
	I1205 20:35:14.296119  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.296129  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:14.296136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:14.296205  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:14.331009  585602 cri.go:89] found id: ""
	I1205 20:35:14.331037  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.331045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:14.331057  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:14.331070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:14.384877  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:14.384935  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:14.400458  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:14.400507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:14.475745  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:14.475774  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:14.475787  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:14.553150  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:14.553192  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:14.164516  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:16.165316  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:18.119418  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.120499  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:17.095700  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:17.109135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:17.109215  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:17.146805  585602 cri.go:89] found id: ""
	I1205 20:35:17.146838  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.146851  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:17.146861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:17.146919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:17.186861  585602 cri.go:89] found id: ""
	I1205 20:35:17.186891  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.186901  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:17.186907  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:17.186960  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:17.223113  585602 cri.go:89] found id: ""
	I1205 20:35:17.223148  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.223159  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:17.223166  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:17.223238  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:17.263066  585602 cri.go:89] found id: ""
	I1205 20:35:17.263098  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.263110  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:17.263118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:17.263187  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:17.300113  585602 cri.go:89] found id: ""
	I1205 20:35:17.300153  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.300167  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:17.300175  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:17.300237  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:17.339135  585602 cri.go:89] found id: ""
	I1205 20:35:17.339172  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.339184  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:17.339193  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:17.339260  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:17.376200  585602 cri.go:89] found id: ""
	I1205 20:35:17.376229  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.376239  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:17.376248  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:17.376354  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:17.411852  585602 cri.go:89] found id: ""
	I1205 20:35:17.411895  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.411906  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:17.411919  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:17.411948  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:17.463690  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:17.463729  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:17.478912  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:17.478946  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:17.552874  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:17.552907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:17.552933  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:17.633621  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:17.633667  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:20.175664  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:20.191495  585602 kubeadm.go:597] duration metric: took 4m4.568774806s to restartPrimaryControlPlane
	W1205 20:35:20.191570  585602 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:20.191594  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:20.660014  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:20.676684  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:20.688338  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:20.699748  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:20.699770  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:20.699822  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:20.710417  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:20.710497  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:20.722295  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:20.732854  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:20.732933  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:20.744242  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.754593  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:20.754671  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.766443  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:20.777087  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:20.777157  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
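The sequence above is the stale-kubeconfig check: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the grep fails (here every file is simply missing, so each grep exits with status 2 and the file is removed anyway). Condensed into one loop, the same logic looks roughly like this (a sketch; the endpoint and file names are taken from the log):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done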
	I1205 20:35:20.788406  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:20.869602  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:35:20.869778  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:21.022417  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:21.022558  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:21.022715  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
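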
	I1205 20:35:21.213817  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:21.216995  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:21.217146  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:21.217240  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:21.217373  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:21.217502  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:21.217614  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:21.217699  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:21.217784  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:21.217876  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:21.217985  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:21.218129  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:21.218186  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:21.218289  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:21.337924  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:21.464355  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:21.709734  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:21.837040  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:21.860767  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:21.860894  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:21.860934  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:22.002564  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:18.663978  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.665113  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.622593  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.120101  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.004407  585602 out.go:235]   - Booting up control plane ...
	I1205 20:35:22.004560  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:22.009319  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:22.010412  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:22.019041  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:22.021855  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:35:23.163493  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.164833  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.164914  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.619140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.622476  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.664525  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:32.163413  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.411201  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.37438104s)
	I1205 20:35:34.411295  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:34.428580  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:34.439233  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:34.450165  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:34.450192  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:34.450255  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:34.461910  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:34.461985  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:34.473936  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:34.484160  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:34.484240  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:34.495772  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.507681  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:34.507757  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.519932  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:34.532111  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:34.532190  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:35:34.543360  585113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:34.594095  585113 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:35:34.594214  585113 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:34.712502  585113 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:34.712685  585113 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:34.712818  585113 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:35:34.729419  585113 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:34.731281  585113 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:34.731395  585113 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:34.731486  585113 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:34.731614  585113 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:34.731715  585113 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:34.731812  585113 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:34.731902  585113 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:34.731994  585113 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:34.732082  585113 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:34.732179  585113 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:34.732252  585113 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:34.732336  585113 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:34.732428  585113 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:35.125135  585113 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:35.188591  585113 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:35:35.330713  585113 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:35.497785  585113 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:35.839010  585113 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:35.839656  585113 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:35.842311  585113 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:32.118898  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.119153  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.164007  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:36.164138  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:35.844403  585113 out.go:235]   - Booting up control plane ...
	I1205 20:35:35.844534  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:35.844602  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:35.845242  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:35.865676  585113 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:35.871729  585113 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:35.871825  585113 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:36.007728  585113 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:35:36.007948  585113 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:35:36.510090  585113 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.141078ms
	I1205 20:35:36.510208  585113 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
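Both probes kubeadm reports here can be reproduced by hand while a run like this is in progress; a sketch follows (the kubelet healthz URL is taken from the output above, the apiserver port 8443 from the rest of the log, and the exact curl flags are an assumption):

    curl -s  http://127.0.0.1:10248/healthz; echo    # kubelet health probe used by the kubelet-check phase
    curl -sk https://localhost:8443/healthz; echo    # apiserver health probe; prints "ok" once the API server is up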
	I1205 20:35:36.119432  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:38.121093  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.620523  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:41.512166  585113 kubeadm.go:310] [api-check] The API server is healthy after 5.00243802s
	I1205 20:35:41.529257  585113 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:35:41.545958  585113 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:35:41.585500  585113 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:35:41.585726  585113 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-789000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:35:41.606394  585113 kubeadm.go:310] [bootstrap-token] Using token: j30n5x.myrhz9pya6yl1f1z
	I1205 20:35:41.608046  585113 out.go:235]   - Configuring RBAC rules ...
	I1205 20:35:41.608229  585113 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:35:41.616083  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:35:41.625777  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:35:41.629934  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:35:41.633726  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:35:41.640454  585113 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:35:41.923125  585113 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:35:42.363841  585113 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:35:42.924569  585113 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:35:42.924594  585113 kubeadm.go:310] 
	I1205 20:35:42.924660  585113 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:35:42.924668  585113 kubeadm.go:310] 
	I1205 20:35:42.924750  585113 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:35:42.924768  585113 kubeadm.go:310] 
	I1205 20:35:42.924802  585113 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:35:42.924865  585113 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:35:42.924926  585113 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:35:42.924969  585113 kubeadm.go:310] 
	I1205 20:35:42.925060  585113 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:35:42.925069  585113 kubeadm.go:310] 
	I1205 20:35:42.925120  585113 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:35:42.925154  585113 kubeadm.go:310] 
	I1205 20:35:42.925255  585113 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:35:42.925374  585113 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:35:42.925477  585113 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:35:42.925488  585113 kubeadm.go:310] 
	I1205 20:35:42.925604  585113 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:35:42.925691  585113 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:35:42.925701  585113 kubeadm.go:310] 
	I1205 20:35:42.925830  585113 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.925966  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:35:42.926019  585113 kubeadm.go:310] 	--control-plane 
	I1205 20:35:42.926034  585113 kubeadm.go:310] 
	I1205 20:35:42.926136  585113 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:35:42.926147  585113 kubeadm.go:310] 
	I1205 20:35:42.926258  585113 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.926400  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 20:35:42.927105  585113 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:35:42.927269  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:35:42.927283  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:35:42.929046  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:35:38.164698  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.665499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:42.930620  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:35:42.941706  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
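For context, the 1-k8s.conflist copied here is a standard CNI bridge configuration. The actual 496-byte file minikube generates is not reproduced in the log; purely as an illustration of the format (all values below are hypothetical), a minimal bridge conflist could look like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF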
	I1205 20:35:42.964041  585113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:35:42.964154  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.964191  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-789000 minikube.k8s.io/updated_at=2024_12_05T20_35_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=embed-certs-789000 minikube.k8s.io/primary=true
	I1205 20:35:43.027876  585113 ops.go:34] apiserver oom_adj: -16
	I1205 20:35:43.203087  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:43.703446  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.203895  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.703277  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:45.203421  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.623820  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.118957  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.704129  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.203682  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.703213  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.203225  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.330051  585113 kubeadm.go:1113] duration metric: took 4.365966546s to wait for elevateKubeSystemPrivileges
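	(Editor note) The repeated `kubectl get sa default` calls above are the poll behind the elevateKubeSystemPrivileges step: the cluster-admin binding for kube-system is applied once, then the loop waits for the default service account to appear. A rough shell equivalent of that poll, using the same binary and kubeconfig paths as the log, would be:

	    # poll until the "default" ServiceAccount exists, as the retry loop above does
	    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
	            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	        sleep 0.5
	    done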
	I1205 20:35:47.330104  585113 kubeadm.go:394] duration metric: took 4m57.530103825s to StartCluster
	I1205 20:35:47.330143  585113 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.330296  585113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:35:47.332937  585113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.333273  585113 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:47.333380  585113 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:35:47.333478  585113 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-789000"
	I1205 20:35:47.333500  585113 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-789000"
	I1205 20:35:47.333499  585113 addons.go:69] Setting default-storageclass=true in profile "embed-certs-789000"
	W1205 20:35:47.333510  585113 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:35:47.333523  585113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-789000"
	I1205 20:35:47.333545  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.333554  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:47.333631  585113 addons.go:69] Setting metrics-server=true in profile "embed-certs-789000"
	I1205 20:35:47.333651  585113 addons.go:234] Setting addon metrics-server=true in "embed-certs-789000"
	W1205 20:35:47.333660  585113 addons.go:243] addon metrics-server should already be in state true
	I1205 20:35:47.333692  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.334001  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334043  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334003  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334101  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334157  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334339  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.335448  585113 out.go:177] * Verifying Kubernetes components...
	I1205 20:35:47.337056  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:47.353039  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I1205 20:35:47.353726  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.354437  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.354467  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.354870  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.355580  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.355654  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.355702  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I1205 20:35:47.355760  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1205 20:35:47.356180  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356224  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356771  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356796  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.356815  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356834  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.357246  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357245  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.357862  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.357916  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.361951  585113 addons.go:234] Setting addon default-storageclass=true in "embed-certs-789000"
	W1205 20:35:47.361974  585113 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:35:47.362004  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.362369  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.362416  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.372862  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I1205 20:35:47.373465  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.373983  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.374011  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.374347  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.374570  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.376329  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.378476  585113 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:35:47.379882  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:35:47.379909  585113 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:35:47.379933  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.382045  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
	I1205 20:35:47.382855  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.383440  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.383459  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.383563  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.383828  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.384092  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.384101  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.384117  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.384150  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1205 20:35:47.384381  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.384517  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.384635  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.384705  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.384850  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.385249  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.385262  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.385613  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.385744  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.386054  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.386085  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.387649  585113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:35:43.164980  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.665449  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.665725  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.388998  585113 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.389011  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:35:47.389025  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.391724  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392285  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.392317  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392362  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.392521  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.392663  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.392804  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.402558  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I1205 20:35:47.403109  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.403636  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.403653  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.403977  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.404155  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.405636  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.405859  585113 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.405876  585113 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:35:47.405894  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.408366  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.408827  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.408868  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.409107  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.409276  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.409436  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.409577  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.589046  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:47.620164  585113 node_ready.go:35] waiting up to 6m0s for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635800  585113 node_ready.go:49] node "embed-certs-789000" has status "Ready":"True"
	I1205 20:35:47.635824  585113 node_ready.go:38] duration metric: took 15.625152ms for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635836  585113 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:47.647842  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:47.738529  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:35:47.738558  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:35:47.741247  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.741443  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.822503  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:35:47.822543  585113 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:35:47.886482  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:47.886512  585113 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:35:47.926018  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:48.100013  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100059  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.100371  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.100392  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.100408  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100416  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.102261  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.102313  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.102342  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115407  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.115429  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.115762  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.115859  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115870  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721035  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721068  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721380  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721400  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.721447  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721855  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721868  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721880  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.294512  585113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.36844122s)
	I1205 20:35:49.294581  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.294598  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.294953  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295014  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295028  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295057  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.295071  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.295341  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295391  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295403  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295414  585113 addons.go:475] Verifying addon metrics-server=true in "embed-certs-789000"
	I1205 20:35:49.297183  585113 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:35:49.298509  585113 addons.go:510] duration metric: took 1.965140064s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
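	(Editor note) The addon step above reduces to the kubectl apply calls already visible in the log. Re-running them by hand against the same node (paths exactly as in the log) would look like:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply \
	        -f /etc/kubernetes/addons/storageclass.yaml \
	        -f /etc/kubernetes/addons/storage-provisioner.yaml
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply \
	        -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	        -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	        -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	        -f /etc/kubernetes/addons/metrics-server-service.yaml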
	I1205 20:35:49.657195  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.121445  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:49.622568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:50.163712  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.165654  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.155012  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.155309  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.155346  585113 pod_ready.go:82] duration metric: took 6.507465102s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.155356  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160866  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.160895  585113 pod_ready.go:82] duration metric: took 5.529623ms for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160909  585113 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166444  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.166475  585113 pod_ready.go:82] duration metric: took 5.558605ms for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166487  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:52.118202  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.119543  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.664661  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.162802  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:56.172832  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.173005  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.173052  585113 pod_ready.go:82] duration metric: took 3.006542827s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.173068  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178461  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.178489  585113 pod_ready.go:82] duration metric: took 5.413563ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178499  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183130  585113 pod_ready.go:93] pod "kube-proxy-znjpk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.183162  585113 pod_ready.go:82] duration metric: took 4.655743ms for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183178  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351816  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.351842  585113 pod_ready.go:82] duration metric: took 168.656328ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351851  585113 pod_ready.go:39] duration metric: took 9.716003373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
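	(Editor note) The pod_ready polling above (about 9.7s in total for this profile) checks the Ready condition on the system-critical pods by label or component. A hedged, roughly equivalent check from outside the VM, assuming the embed-certs-789000 context is active, would be:

	    # roughly what the pod_ready helper polls for (kube-dns shown; the other components are analogous)
	    kubectl --context embed-certs-789000 -n kube-system wait \
	        --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m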
	I1205 20:35:57.351866  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:35:57.351921  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:57.368439  585113 api_server.go:72] duration metric: took 10.035127798s to wait for apiserver process to appear ...
	I1205 20:35:57.368471  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:35:57.368496  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:35:57.372531  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:35:57.373449  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:35:57.373466  585113 api_server.go:131] duration metric: took 4.987422ms to wait for apiserver health ...
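	(Editor note) The healthz probe above is a plain HTTPS GET against the apiserver endpoint; the 200/`ok` reply is what ends the wait. Checked by hand (certificate verification skipped here for brevity, which is an assumption on my part):

	    curl -k https://192.168.39.200:8443/healthz
	    # expected output, as in the log above: ok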
	I1205 20:35:57.373474  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:35:57.554591  585113 system_pods.go:59] 9 kube-system pods found
	I1205 20:35:57.554620  585113 system_pods.go:61] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.554625  585113 system_pods.go:61] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.554629  585113 system_pods.go:61] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.554633  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.554637  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.554640  585113 system_pods.go:61] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.554643  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.554649  585113 system_pods.go:61] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.554653  585113 system_pods.go:61] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.554660  585113 system_pods.go:74] duration metric: took 181.180919ms to wait for pod list to return data ...
	I1205 20:35:57.554667  585113 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:35:57.757196  585113 default_sa.go:45] found service account: "default"
	I1205 20:35:57.757226  585113 default_sa.go:55] duration metric: took 202.553823ms for default service account to be created ...
	I1205 20:35:57.757236  585113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:35:57.956943  585113 system_pods.go:86] 9 kube-system pods found
	I1205 20:35:57.956976  585113 system_pods.go:89] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.956982  585113 system_pods.go:89] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.956985  585113 system_pods.go:89] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.956989  585113 system_pods.go:89] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.956992  585113 system_pods.go:89] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.956996  585113 system_pods.go:89] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.956999  585113 system_pods.go:89] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.957005  585113 system_pods.go:89] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.957010  585113 system_pods.go:89] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.957019  585113 system_pods.go:126] duration metric: took 199.777723ms to wait for k8s-apps to be running ...
	I1205 20:35:57.957028  585113 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:35:57.957079  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:57.971959  585113 system_svc.go:56] duration metric: took 14.916307ms WaitForService to wait for kubelet
	I1205 20:35:57.972000  585113 kubeadm.go:582] duration metric: took 10.638693638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:35:57.972027  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:35:58.153272  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:35:58.153302  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:35:58.153323  585113 node_conditions.go:105] duration metric: took 181.282208ms to run NodePressure ...
	I1205 20:35:58.153338  585113 start.go:241] waiting for startup goroutines ...
	I1205 20:35:58.153348  585113 start.go:246] waiting for cluster config update ...
	I1205 20:35:58.153361  585113 start.go:255] writing updated cluster config ...
	I1205 20:35:58.153689  585113 ssh_runner.go:195] Run: rm -f paused
	I1205 20:35:58.206377  585113 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:35:58.208199  585113 out.go:177] * Done! kubectl is now configured to use "embed-certs-789000" cluster and "default" namespace by default
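	(Editor note) At this point the embed-certs-789000 profile is up, with kubectl 1.31.3 against a 1.31.2 cluster (minor skew 0). A quick sanity check of the resulting kubeconfig context, offered as the obvious follow-up rather than something this test actually performs, would be:

	    kubectl config current-context                     # expected: embed-certs-789000
	    kubectl --context embed-certs-789000 get nodes -o wide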
	I1205 20:35:56.626799  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.119621  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.164803  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.663254  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.119680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:03.121023  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.121537  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:02.025194  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:36:02.025306  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:02.025498  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
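	(Editor note) For the other profile being started in parallel (log prefix 585602), the kubeadm kubelet-check is failing: the healthz endpoint on port 10248 refuses connections, which usually means the kubelet never came up. The probe kubeadm quotes above, plus typical next diagnostic steps on that node (the latter are my suggestion, not commands from this run), would be:

	    curl -sSL http://localhost:10248/healthz     # the exact probe quoted in the log; currently: connection refused
	    sudo systemctl status kubelet                # is the unit running at all?
	    sudo journalctl -u kubelet -n 100            # recent kubelet logs for the underlying error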
	I1205 20:36:03.664172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.672410  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.623229  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.119845  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.025608  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:07.025922  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:08.164875  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.665374  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:12.622566  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.120084  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:13.163662  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.164021  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.619629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:19.620524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.026490  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:17.026747  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:19.663904  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:22.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:21.621019  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.119524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.164932  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.670748  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.119795  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:27.113870  585025 pod_ready.go:82] duration metric: took 4m0.000886242s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:27.113920  585025 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:36:27.113943  585025 pod_ready.go:39] duration metric: took 4m14.547292745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:27.113975  585025 kubeadm.go:597] duration metric: took 4m21.939840666s to restartPrimaryControlPlane
	W1205 20:36:27.114068  585025 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:36:27.114099  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:36:29.163499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:29.664158  585929 pod_ready.go:82] duration metric: took 4m0.007168384s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:29.664191  585929 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:36:29.664201  585929 pod_ready.go:39] duration metric: took 4m2.00733866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:29.664226  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:36:29.664290  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:29.664377  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:29.712790  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:29.712814  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:29.712819  585929 cri.go:89] found id: ""
	I1205 20:36:29.712826  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:29.712879  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.717751  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.721968  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:29.722045  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:29.770289  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:29.770322  585929 cri.go:89] found id: ""
	I1205 20:36:29.770330  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:29.770392  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.775391  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:29.775475  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:29.816354  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:29.816380  585929 cri.go:89] found id: ""
	I1205 20:36:29.816388  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:29.816454  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.821546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:29.821621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:29.870442  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:29.870467  585929 cri.go:89] found id: ""
	I1205 20:36:29.870476  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:29.870541  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.875546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:29.875658  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:29.924567  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:29.924595  585929 cri.go:89] found id: ""
	I1205 20:36:29.924603  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:29.924666  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.929148  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:29.929216  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:29.968092  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:29.968122  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:29.968126  585929 cri.go:89] found id: ""
	I1205 20:36:29.968134  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:29.968186  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.973062  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.977693  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:29.977762  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:30.014944  585929 cri.go:89] found id: ""
	I1205 20:36:30.014982  585929 logs.go:282] 0 containers: []
	W1205 20:36:30.014994  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:30.015002  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:30.015101  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:30.062304  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:30.062328  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:30.062332  585929 cri.go:89] found id: ""
	I1205 20:36:30.062339  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:30.062394  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.067152  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.071767  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:30.071788  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:30.125030  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:30.125069  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:30.167607  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:30.167641  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:30.217522  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:30.217558  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:30.298655  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:30.298695  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:30.346687  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:30.346721  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:30.887069  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:30.887126  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:30.907313  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:30.907360  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:30.950285  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:30.950326  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:30.990895  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:30.990929  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:31.032950  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:31.033010  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:31.115132  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:31.115176  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:31.257760  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:31.257797  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:31.300521  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:31.300553  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:31.338339  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:31.338373  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
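	(Editor note) The log-gathering pass above uses the commands shown verbatim: crictl to resolve container IDs by name and tail their logs, journalctl for the kubelet and CRI-O units, and the bundled kubectl for `describe nodes`. Condensed into a reusable form (the container ID is a placeholder to be filled from the first command):

	    sudo crictl ps -a --quiet --name=kube-apiserver        # resolve container IDs for a component
	    sudo /usr/bin/crictl logs --tail 400 <container-id>    # last 400 log lines for one container
	    sudo journalctl -u kubelet -n 400                      # kubelet unit logs
	    sudo journalctl -u crio -n 400                         # CRI-O unit logs
	    sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig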
	I1205 20:36:33.892406  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:36:33.908917  585929 api_server.go:72] duration metric: took 4m14.472283422s to wait for apiserver process to appear ...
	I1205 20:36:33.908950  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:36:33.908993  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:33.909067  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:33.958461  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:33.958496  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:33.958502  585929 cri.go:89] found id: ""
	I1205 20:36:33.958511  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:33.958585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.963333  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.969472  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:33.969549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:34.010687  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.010711  585929 cri.go:89] found id: ""
	I1205 20:36:34.010721  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:34.010790  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.016468  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:34.016557  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:34.056627  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:34.056656  585929 cri.go:89] found id: ""
	I1205 20:36:34.056666  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:34.056729  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.061343  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:34.061411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:34.099534  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:34.099563  585929 cri.go:89] found id: ""
	I1205 20:36:34.099573  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:34.099643  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.104828  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:34.104891  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:34.150749  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:34.150781  585929 cri.go:89] found id: ""
	I1205 20:36:34.150792  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:34.150863  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.155718  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:34.155797  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:34.202896  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:34.202927  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:34.202934  585929 cri.go:89] found id: ""
	I1205 20:36:34.202943  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:34.203028  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.207791  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.212163  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:34.212243  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:34.254423  585929 cri.go:89] found id: ""
	I1205 20:36:34.254458  585929 logs.go:282] 0 containers: []
	W1205 20:36:34.254470  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:34.254479  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:34.254549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:34.294704  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:34.294737  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:34.294741  585929 cri.go:89] found id: ""
	I1205 20:36:34.294753  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:34.294820  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.299361  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.305411  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:34.305437  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:34.357438  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:34.357472  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.405858  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:34.405893  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:34.898506  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:34.898551  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:35.009818  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:35.009856  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:35.048852  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:35.048882  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:35.100458  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:35.100511  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:35.139923  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:35.139959  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:35.184818  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:35.184852  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:35.265196  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:35.265238  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:35.280790  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:35.280830  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:35.323308  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:35.323343  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:35.364578  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:35.364610  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:35.411413  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:35.411456  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:35.458077  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:35.458117  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:37.997701  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:36:38.003308  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:36:38.004465  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:36:38.004495  585929 api_server.go:131] duration metric: took 4.095536578s to wait for apiserver health ...
	I1205 20:36:38.004505  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:36:38.004532  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:38.004598  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:37.027599  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:37.027910  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:38.048388  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.048427  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:38.048434  585929 cri.go:89] found id: ""
	I1205 20:36:38.048442  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:38.048514  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.052931  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.057338  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:38.057403  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:38.097715  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.097750  585929 cri.go:89] found id: ""
	I1205 20:36:38.097761  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:38.097830  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.104038  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:38.104110  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:38.148485  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.148510  585929 cri.go:89] found id: ""
	I1205 20:36:38.148519  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:38.148585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.153619  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:38.153702  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:38.190467  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.190495  585929 cri.go:89] found id: ""
	I1205 20:36:38.190505  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:38.190561  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.195177  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:38.195259  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:38.240020  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:38.240045  585929 cri.go:89] found id: ""
	I1205 20:36:38.240054  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:38.240123  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.244359  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:38.244425  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:38.282241  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:38.282267  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.282284  585929 cri.go:89] found id: ""
	I1205 20:36:38.282292  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:38.282357  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.287437  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.291561  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:38.291621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:38.333299  585929 cri.go:89] found id: ""
	I1205 20:36:38.333335  585929 logs.go:282] 0 containers: []
	W1205 20:36:38.333345  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:38.333352  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:38.333411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:38.370920  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.370948  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.370952  585929 cri.go:89] found id: ""
	I1205 20:36:38.370960  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:38.371037  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.375549  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.379517  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:38.379548  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.416990  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:38.417023  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:38.499859  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:38.499905  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:38.625291  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:38.625332  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.672549  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:38.672586  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.710017  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:38.710055  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.754004  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:38.754049  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:38.802163  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:38.802206  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:38.817670  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:38.817704  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.864833  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:38.864875  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.909490  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:38.909526  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.952117  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:38.952164  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:39.347620  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:39.347686  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:39.392412  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:39.392450  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:39.433711  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:39.433749  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:41.996602  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:36:41.996634  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:41.996640  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:41.996644  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:41.996648  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:41.996651  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:41.996654  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:41.996661  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:41.996665  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:41.996674  585929 system_pods.go:74] duration metric: took 3.992162062s to wait for pod list to return data ...
	I1205 20:36:41.996682  585929 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:36:41.999553  585929 default_sa.go:45] found service account: "default"
	I1205 20:36:41.999580  585929 default_sa.go:55] duration metric: took 2.889197ms for default service account to be created ...
	I1205 20:36:41.999589  585929 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:36:42.005061  585929 system_pods.go:86] 8 kube-system pods found
	I1205 20:36:42.005099  585929 system_pods.go:89] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:42.005111  585929 system_pods.go:89] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:42.005118  585929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:42.005126  585929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:42.005135  585929 system_pods.go:89] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:42.005143  585929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:42.005159  585929 system_pods.go:89] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:42.005171  585929 system_pods.go:89] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:42.005187  585929 system_pods.go:126] duration metric: took 5.591652ms to wait for k8s-apps to be running ...
	I1205 20:36:42.005201  585929 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:36:42.005267  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:42.021323  585929 system_svc.go:56] duration metric: took 16.10852ms WaitForService to wait for kubelet
	I1205 20:36:42.021358  585929 kubeadm.go:582] duration metric: took 4m22.584731606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:36:42.021424  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:36:42.024632  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:42.024658  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:42.024682  585929 node_conditions.go:105] duration metric: took 3.248548ms to run NodePressure ...
	I1205 20:36:42.024698  585929 start.go:241] waiting for startup goroutines ...
	I1205 20:36:42.024709  585929 start.go:246] waiting for cluster config update ...
	I1205 20:36:42.024742  585929 start.go:255] writing updated cluster config ...
	I1205 20:36:42.025047  585929 ssh_runner.go:195] Run: rm -f paused
	I1205 20:36:42.077303  585929 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:36:42.079398  585929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-942599" cluster and "default" namespace by default
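Not captured in the log itself: a quick way to confirm that the context minikube just wrote is usable. A minimal sketch, assuming kubectl reads the same kubeconfig this run updated (the context name comes from the "Done!" line above):

	# sanity check against the freshly configured context (not part of the test run)
	kubectl --context default-k8s-diff-port-942599 get nodes
	kubectl --context default-k8s-diff-port-942599 -n kube-system get pods

Both should answer promptly, since the apiserver at 192.168.50.96:8444 already returned 200 on /healthz earlier in this log.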
	I1205 20:36:53.411276  585025 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297141231s)
	I1205 20:36:53.411423  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:53.432474  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:36:53.443908  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:36:53.454789  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:36:53.454821  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:36:53.454873  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:36:53.465648  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:36:53.465719  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:36:53.476492  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:36:53.486436  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:36:53.486505  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:36:53.499146  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.510237  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:36:53.510324  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.521186  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:36:53.531797  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:36:53.531890  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:36:53.543056  585025 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:36:53.735019  585025 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:01.531096  585025 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:37:01.531179  585025 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:37:01.531278  585025 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:37:01.531407  585025 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:37:01.531546  585025 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:37:01.531635  585025 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:37:01.533284  585025 out.go:235]   - Generating certificates and keys ...
	I1205 20:37:01.533400  585025 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:37:01.533484  585025 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:37:01.533589  585025 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:37:01.533676  585025 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:37:01.533741  585025 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:37:01.533820  585025 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:37:01.533901  585025 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:37:01.533954  585025 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:37:01.534023  585025 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:37:01.534097  585025 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:37:01.534137  585025 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:37:01.534193  585025 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:37:01.534264  585025 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:37:01.534347  585025 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:37:01.534414  585025 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:37:01.534479  585025 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:37:01.534529  585025 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:37:01.534600  585025 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:37:01.534656  585025 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:37:01.536208  585025 out.go:235]   - Booting up control plane ...
	I1205 20:37:01.536326  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:37:01.536394  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:37:01.536487  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:37:01.536653  585025 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:37:01.536772  585025 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:37:01.536814  585025 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:37:01.536987  585025 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:37:01.537144  585025 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:37:01.537240  585025 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.640403ms
	I1205 20:37:01.537352  585025 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:37:01.537438  585025 kubeadm.go:310] [api-check] The API server is healthy after 5.002069704s
	I1205 20:37:01.537566  585025 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:37:01.537705  585025 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:37:01.537766  585025 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:37:01.537959  585025 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-816185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:37:01.538037  585025 kubeadm.go:310] [bootstrap-token] Using token: l8cx4j.koqnwrdaqrc08irs
	I1205 20:37:01.539683  585025 out.go:235]   - Configuring RBAC rules ...
	I1205 20:37:01.539813  585025 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:37:01.539945  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:37:01.540157  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:37:01.540346  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:37:01.540482  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:37:01.540602  585025 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:37:01.540746  585025 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:37:01.540818  585025 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:37:01.540905  585025 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:37:01.540922  585025 kubeadm.go:310] 
	I1205 20:37:01.541012  585025 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:37:01.541027  585025 kubeadm.go:310] 
	I1205 20:37:01.541149  585025 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:37:01.541160  585025 kubeadm.go:310] 
	I1205 20:37:01.541197  585025 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:37:01.541253  585025 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:37:01.541297  585025 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:37:01.541303  585025 kubeadm.go:310] 
	I1205 20:37:01.541365  585025 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:37:01.541371  585025 kubeadm.go:310] 
	I1205 20:37:01.541417  585025 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:37:01.541427  585025 kubeadm.go:310] 
	I1205 20:37:01.541486  585025 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:37:01.541593  585025 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:37:01.541689  585025 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:37:01.541707  585025 kubeadm.go:310] 
	I1205 20:37:01.541811  585025 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:37:01.541917  585025 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:37:01.541928  585025 kubeadm.go:310] 
	I1205 20:37:01.542020  585025 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542138  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:37:01.542171  585025 kubeadm.go:310] 	--control-plane 
	I1205 20:37:01.542180  585025 kubeadm.go:310] 
	I1205 20:37:01.542264  585025 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:37:01.542283  585025 kubeadm.go:310] 
	I1205 20:37:01.542407  585025 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542513  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
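The join commands above embed the bootstrap token l8cx4j.koqnwrdaqrc08irs, and kubeadm bootstrap tokens expire after a default TTL of 24 hours, so they are only valid for a short window. This run adds no further nodes, but a fresh join command could later be generated on the control plane; a minimal sketch using the standard kubeadm CLI, not taken from this log:

	# prints a new "kubeadm join ..." line backed by a freshly created token
	sudo kubeadm token create --print-join-command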
	I1205 20:37:01.542530  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:37:01.542538  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:37:01.543967  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:37:01.545652  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:37:01.557890  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:37:01.577447  585025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:37:01.577532  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-816185 minikube.k8s.io/updated_at=2024_12_05T20_37_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=no-preload-816185 minikube.k8s.io/primary=true
	I1205 20:37:01.577542  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:01.618121  585025 ops.go:34] apiserver oom_adj: -16
	I1205 20:37:01.806825  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.307212  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.807893  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.307202  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.806891  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.307571  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.807485  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.307695  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.387751  585025 kubeadm.go:1113] duration metric: took 3.810307917s to wait for elevateKubeSystemPrivileges
	I1205 20:37:05.387790  585025 kubeadm.go:394] duration metric: took 5m0.269375789s to StartCluster
	I1205 20:37:05.387810  585025 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.387891  585025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:37:05.389703  585025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.389984  585025 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:37:05.390056  585025 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:37:05.390179  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:37:05.390193  585025 addons.go:69] Setting storage-provisioner=true in profile "no-preload-816185"
	I1205 20:37:05.390216  585025 addons.go:69] Setting default-storageclass=true in profile "no-preload-816185"
	I1205 20:37:05.390246  585025 addons.go:69] Setting metrics-server=true in profile "no-preload-816185"
	I1205 20:37:05.390281  585025 addons.go:234] Setting addon metrics-server=true in "no-preload-816185"
	W1205 20:37:05.390295  585025 addons.go:243] addon metrics-server should already be in state true
	I1205 20:37:05.390340  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390255  585025 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-816185"
	I1205 20:37:05.390263  585025 addons.go:234] Setting addon storage-provisioner=true in "no-preload-816185"
	W1205 20:37:05.390463  585025 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:37:05.390533  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390844  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390888  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.390852  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390947  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390973  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391032  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391810  585025 out.go:177] * Verifying Kubernetes components...
	I1205 20:37:05.393274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:37:05.408078  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40259
	I1205 20:37:05.408366  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1205 20:37:05.408765  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.408780  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.409315  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409337  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409441  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409465  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409767  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409800  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.410249  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I1205 20:37:05.410487  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.410537  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.410753  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.411387  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.411412  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.411847  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.412515  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.412565  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.413770  585025 addons.go:234] Setting addon default-storageclass=true in "no-preload-816185"
	W1205 20:37:05.413796  585025 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:37:05.413828  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.414184  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.414231  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.430214  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1205 20:37:05.430684  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.431260  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.431286  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.431697  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.431929  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.432941  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1205 20:37:05.433361  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.433835  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.433855  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.433933  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.434385  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.434596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.434638  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I1205 20:37:05.435193  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.435667  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.435694  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.435994  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.436000  585025 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:37:05.436635  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.436657  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.436683  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.437421  585025 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.437441  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:37:05.437461  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.438221  585025 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:37:05.439704  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:37:05.439721  585025 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:37:05.439737  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.440522  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441031  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.441058  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441198  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.441352  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.441458  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.441582  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.445842  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446223  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.446248  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446449  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.446661  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.446806  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.446923  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.472870  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I1205 20:37:05.473520  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.474053  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.474080  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.474456  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.474666  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.476603  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.476836  585025 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.476859  585025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:37:05.476886  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.480063  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480546  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.480580  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.481175  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.481331  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.481425  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.607284  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:37:05.627090  585025 node_ready.go:35] waiting up to 6m0s for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637577  585025 node_ready.go:49] node "no-preload-816185" has status "Ready":"True"
	I1205 20:37:05.637602  585025 node_ready.go:38] duration metric: took 10.476209ms for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637611  585025 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:05.642969  585025 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:05.696662  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.725276  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:37:05.725309  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:37:05.779102  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:37:05.779137  585025 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:37:05.814495  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.814531  585025 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:37:05.823828  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.863152  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.948854  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.948895  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949242  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949266  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949275  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.949294  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.949302  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949590  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949601  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949612  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.975655  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.975683  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.975962  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.975978  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004027  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.180164032s)
	I1205 20:37:07.004103  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004117  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004498  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004520  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004535  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004545  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004802  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004820  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208032  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.344819218s)
	I1205 20:37:07.208143  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208159  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208537  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208556  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208566  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208573  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208846  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208860  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208871  585025 addons.go:475] Verifying addon metrics-server=true in "no-preload-816185"
	I1205 20:37:07.210487  585025 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:37:07.212093  585025 addons.go:510] duration metric: took 1.822047986s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
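With the addons applied, their state can also be inspected by hand; a minimal sketch (standard minikube/kubectl usage, not part of this log; the profile name comes from the run above):

	# addon status for this profile
	minikube addons list -p no-preload-816185
	# metrics-server manifests were just applied; check its Deployment in kube-system
	kubectl -n kube-system get deploy metrics-server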
	I1205 20:37:07.658678  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:08.156061  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:08.156094  585025 pod_ready.go:82] duration metric: took 2.513098547s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:08.156109  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:10.162704  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:12.163550  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.163578  585025 pod_ready.go:82] duration metric: took 4.007461295s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.163601  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169123  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.169155  585025 pod_ready.go:82] duration metric: took 5.544964ms for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169170  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.175288  585025 pod_ready.go:103] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:14.676107  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:14.676137  585025 pod_ready.go:82] duration metric: took 2.506959209s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.676146  585025 pod_ready.go:39] duration metric: took 9.038525731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:14.676165  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:37:14.676222  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:37:14.692508  585025 api_server.go:72] duration metric: took 9.302489277s to wait for apiserver process to appear ...
	I1205 20:37:14.692540  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:37:14.692562  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:37:14.697176  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:37:14.698320  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:37:14.698345  585025 api_server.go:131] duration metric: took 5.796971ms to wait for apiserver health ...
	I1205 20:37:14.698357  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:37:14.706456  585025 system_pods.go:59] 9 kube-system pods found
	I1205 20:37:14.706503  585025 system_pods.go:61] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.706512  585025 system_pods.go:61] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.706518  585025 system_pods.go:61] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.706524  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.706529  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.706534  585025 system_pods.go:61] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.706539  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.706549  585025 system_pods.go:61] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.706555  585025 system_pods.go:61] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.706565  585025 system_pods.go:74] duration metric: took 8.200516ms to wait for pod list to return data ...
	I1205 20:37:14.706577  585025 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:37:14.716217  585025 default_sa.go:45] found service account: "default"
	I1205 20:37:14.716259  585025 default_sa.go:55] duration metric: took 9.664045ms for default service account to be created ...
	I1205 20:37:14.716293  585025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:37:14.723293  585025 system_pods.go:86] 9 kube-system pods found
	I1205 20:37:14.723323  585025 system_pods.go:89] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.723329  585025 system_pods.go:89] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.723333  585025 system_pods.go:89] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.723337  585025 system_pods.go:89] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.723342  585025 system_pods.go:89] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.723346  585025 system_pods.go:89] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.723349  585025 system_pods.go:89] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.723355  585025 system_pods.go:89] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.723360  585025 system_pods.go:89] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.723368  585025 system_pods.go:126] duration metric: took 7.067824ms to wait for k8s-apps to be running ...
	I1205 20:37:14.723375  585025 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:37:14.723422  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:14.744142  585025 system_svc.go:56] duration metric: took 20.751867ms WaitForService to wait for kubelet
	I1205 20:37:14.744179  585025 kubeadm.go:582] duration metric: took 9.354165706s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:37:14.744200  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:37:14.751985  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:14.752026  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:14.752043  585025 node_conditions.go:105] duration metric: took 7.836665ms to run NodePressure ...
	I1205 20:37:14.752069  585025 start.go:241] waiting for startup goroutines ...
	I1205 20:37:14.752081  585025 start.go:246] waiting for cluster config update ...
	I1205 20:37:14.752095  585025 start.go:255] writing updated cluster config ...
	I1205 20:37:14.752490  585025 ssh_runner.go:195] Run: rm -f paused
	I1205 20:37:14.806583  585025 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:37:14.808574  585025 out.go:177] * Done! kubectl is now configured to use "no-preload-816185" cluster and "default" namespace by default
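
Editor's note: the block above shows minikube's readiness gates for the no-preload cluster: per-pod Ready waits (pod_ready.go), an apiserver healthz probe, system-pod and default-service-account checks, and a kubelet service check. As a rough illustration only, the sketch below waits for a kube-system pod's Ready condition with client-go, which is the kind of check the pod_ready.go lines correspond to. It is not minikube's actual implementation; the kubeconfig path, timeout, and pod name are placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls a pod's Ready condition until it is True or the timeout expires.
func waitForPodReady(kubeconfig, namespace, name string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient API errors
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Placeholder kubeconfig path; the pod name matches one from the log above.
	if err := waitForPodReady("/path/to/kubeconfig", "kube-system", "etcd-no-preload-816185", 6*time.Minute); err != nil {
		fmt.Println("pod did not become Ready:", err)
	}
}
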
	I1205 20:37:17.029681  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:37:17.029940  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:37:17.029963  585602 kubeadm.go:310] 
	I1205 20:37:17.030022  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:37:17.030101  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:37:17.030128  585602 kubeadm.go:310] 
	I1205 20:37:17.030167  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:37:17.030209  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:37:17.030353  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:37:17.030369  585602 kubeadm.go:310] 
	I1205 20:37:17.030489  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:37:17.030540  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:37:17.030584  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:37:17.030594  585602 kubeadm.go:310] 
	I1205 20:37:17.030733  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:37:17.030843  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:37:17.030855  585602 kubeadm.go:310] 
	I1205 20:37:17.031025  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:37:17.031154  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:37:17.031268  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:37:17.031374  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:37:17.031386  585602 kubeadm.go:310] 
	I1205 20:37:17.032368  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:17.032493  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:37:17.032562  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 20:37:17.032709  585602 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 20:37:17.032762  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:37:17.518572  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:17.533868  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:37:17.547199  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:37:17.547224  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:37:17.547272  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:37:17.556733  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:37:17.556801  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:37:17.566622  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:37:17.577044  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:37:17.577121  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:37:17.588726  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.599269  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:37:17.599346  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.609243  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:37:17.618947  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:37:17.619034  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:37:17.629228  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:37:17.878785  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:39:13.972213  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:39:13.972379  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 20:39:13.973936  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:39:13.974035  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:39:13.974150  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:39:13.974251  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:39:13.974341  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:39:13.974404  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:39:13.976164  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:39:13.976248  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:39:13.976339  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:39:13.976449  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:39:13.976538  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:39:13.976642  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:39:13.976736  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:39:13.976832  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:39:13.976924  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:39:13.977025  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:39:13.977131  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:39:13.977189  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:39:13.977272  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:39:13.977389  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:39:13.977474  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:39:13.977566  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:39:13.977650  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:39:13.977776  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:39:13.977901  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:39:13.977976  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:39:13.978137  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:39:13.979473  585602 out.go:235]   - Booting up control plane ...
	I1205 20:39:13.979581  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:39:13.979664  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:39:13.979732  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:39:13.979803  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:39:13.979952  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:39:13.980017  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:39:13.980107  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980396  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980511  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980744  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980843  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981116  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981227  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981439  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981528  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981718  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981731  585602 kubeadm.go:310] 
	I1205 20:39:13.981773  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:39:13.981831  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:39:13.981839  585602 kubeadm.go:310] 
	I1205 20:39:13.981888  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:39:13.981941  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:39:13.982052  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:39:13.982059  585602 kubeadm.go:310] 
	I1205 20:39:13.982144  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:39:13.982174  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:39:13.982208  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:39:13.982215  585602 kubeadm.go:310] 
	I1205 20:39:13.982302  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:39:13.982415  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:39:13.982431  585602 kubeadm.go:310] 
	I1205 20:39:13.982540  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:39:13.982618  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:39:13.982701  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:39:13.982766  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:39:13.982839  585602 kubeadm.go:310] 
	I1205 20:39:13.982855  585602 kubeadm.go:394] duration metric: took 7m58.414377536s to StartCluster
	I1205 20:39:13.982907  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:39:13.982975  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:39:14.031730  585602 cri.go:89] found id: ""
	I1205 20:39:14.031767  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.031779  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:39:14.031791  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:39:14.031865  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:39:14.068372  585602 cri.go:89] found id: ""
	I1205 20:39:14.068420  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.068433  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:39:14.068440  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:39:14.068512  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:39:14.106807  585602 cri.go:89] found id: ""
	I1205 20:39:14.106837  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.106847  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:39:14.106856  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:39:14.106930  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:39:14.144926  585602 cri.go:89] found id: ""
	I1205 20:39:14.144952  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.144960  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:39:14.144974  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:39:14.145052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:39:14.182712  585602 cri.go:89] found id: ""
	I1205 20:39:14.182742  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.182754  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:39:14.182762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:39:14.182826  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:39:14.220469  585602 cri.go:89] found id: ""
	I1205 20:39:14.220505  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.220519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:39:14.220527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:39:14.220593  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:39:14.269791  585602 cri.go:89] found id: ""
	I1205 20:39:14.269823  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.269835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:39:14.269842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:39:14.269911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:39:14.313406  585602 cri.go:89] found id: ""
	I1205 20:39:14.313439  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.313450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:39:14.313464  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:39:14.313483  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:39:14.330488  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:39:14.330526  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:39:14.417358  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:39:14.417403  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:39:14.417421  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:39:14.530226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:39:14.530270  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:39:14.585471  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:39:14.585512  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 20:39:14.636389  585602 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 20:39:14.636456  585602 out.go:270] * 
	W1205 20:39:14.636535  585602 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.636549  585602 out.go:270] * 
	W1205 20:39:14.637475  585602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:39:14.640654  585602 out.go:201] 
	W1205 20:39:14.641873  585602 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.641931  585602 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 20:39:14.641975  585602 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 20:39:14.643389  585602 out.go:201] 
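
Editor's note: the failure above is kubeadm's [kubelet-check], an HTTP GET against the kubelet healthz endpoint on localhost:10248 that keeps returning "connection refused" because the kubelet never came up on this node. Below is a minimal sketch of that probe, assuming the default kubelet healthz port; it is written only to illustrate the check and is not kubeadm's implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	// kubeadm's initial kubelet-check window is 40s; poll until then.
	deadline := time.Now().Add(40 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet healthz: ok")
				return
			}
		}
		// "connection refused" here matches the log above: the kubelet is not listening.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("kubelet healthz never became reachable; inspect 'journalctl -xeu kubelet'")
}
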
	
	
	==> CRI-O <==
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.000652342Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431700000626277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7aa5b930-ee48-45c5-ac6a-41dfb0adf012 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.001503948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=480208c0-2c1a-4162-bdf7-64fd1864ba1f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.001550923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=480208c0-2c1a-4162-bdf7-64fd1864ba1f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.001592152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=480208c0-2c1a-4162-bdf7-64fd1864ba1f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.039109172Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07f9bb93-a28c-4c95-bffd-06632710d7d8 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.039225847Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07f9bb93-a28c-4c95-bffd-06632710d7d8 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.041212409Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2437ecad-6788-4c1f-8505-acd77f9afabf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.041634379Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431700041603432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2437ecad-6788-4c1f-8505-acd77f9afabf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.042476133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a0cee38-58c0-49a0-9bad-88b4f08ee7a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.042558663Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a0cee38-58c0-49a0-9bad-88b4f08ee7a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.042600102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4a0cee38-58c0-49a0-9bad-88b4f08ee7a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.080105776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30154e22-356c-4cec-82c2-edd8747fed4e name=/runtime.v1.RuntimeService/Version
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.080189421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30154e22-356c-4cec-82c2-edd8747fed4e name=/runtime.v1.RuntimeService/Version
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.081617301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=982de4fb-6da2-46c6-9c4e-ac81455a9e92 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.082206941Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431700082176220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=982de4fb-6da2-46c6-9c4e-ac81455a9e92 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.082897047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1143732-8ed8-4246-beca-3b5bb5a6f080 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.083026995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1143732-8ed8-4246-beca-3b5bb5a6f080 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.083096763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a1143732-8ed8-4246-beca-3b5bb5a6f080 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.118424136Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da49b6f1-fb11-4027-b7c8-bd9f464d385d name=/runtime.v1.RuntimeService/Version
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.118503539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da49b6f1-fb11-4027-b7c8-bd9f464d385d name=/runtime.v1.RuntimeService/Version
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.119481693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc52c3d7-0861-4a51-8211-3fc5c1de7645 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.119860389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431700119839760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc52c3d7-0861-4a51-8211-3fc5c1de7645 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.120426420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7df3bc2e-0187-4b7b-8353-4bd4fba7ffbe name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.120475069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7df3bc2e-0187-4b7b-8353-4bd4fba7ffbe name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:48:20 old-k8s-version-386085 crio[629]: time="2024-12-05 20:48:20.120510076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7df3bc2e-0187-4b7b-8353-4bd4fba7ffbe name=/runtime.v1.RuntimeService/ListContainers
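
Editor's note: the empty ListContainersResponse entries above line up with the kubeadm advice earlier in the log: no kube-* containers were ever created on this node. The sketch below drives the same crictl listing from Go; the sudo/crictl invocation and CRI-O socket path are taken from the log, while the wrapper itself is illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent to: sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "unix:///var/run/crio/crio.sock", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// Keep only kube-* containers, dropping pause sandboxes, as the kubeadm hint suggests.
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "kube") && !strings.Contains(line, "pause") {
			fmt.Println(line)
		}
	}
	// An empty result is consistent with the CRI-O journal above: the control-plane
	// static pods were never started because the kubelet never became healthy.
}
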
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 20:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053859] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048232] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.156020] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.849389] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.680157] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 5 20:31] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.058081] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059601] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.177616] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.149980] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.257256] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.927159] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.062736] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.953352] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +9.534888] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 5 20:35] systemd-fstab-generator[5061]: Ignoring "noauto" option for root device
	[Dec 5 20:37] systemd-fstab-generator[5344]: Ignoring "noauto" option for root device
	[  +0.073876] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:48:20 up 17 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-386085 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc0009dbd40)
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]: goroutine 154 [select]:
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007f3ef0, 0x4f0ac20, 0xc0009f0f00, 0x1, 0xc00009e0c0)
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000255180, 0xc00009e0c0)
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b78310, 0xc000b7e0a0)
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6508]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 05 20:48:16 old-k8s-version-386085 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 05 20:48:16 old-k8s-version-386085 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 20:48:16 old-k8s-version-386085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Dec 05 20:48:16 old-k8s-version-386085 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 05 20:48:16 old-k8s-version-386085 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6517]: I1205 20:48:16.760303    6517 server.go:416] Version: v1.20.0
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6517]: I1205 20:48:16.760651    6517 server.go:837] Client rotation is on, will bootstrap in background
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6517]: I1205 20:48:16.762544    6517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6517]: W1205 20:48:16.763505    6517 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 05 20:48:16 old-k8s-version-386085 kubelet[6517]: I1205 20:48:16.763811    6517 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386085 -n old-k8s-version-386085
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 2 (253.231973ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-386085" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.55s)

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (397.95s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-789000 -n embed-certs-789000
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-05 20:51:38.744154438 +0000 UTC m=+6590.949774770
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.022µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-789000 -n embed-certs-789000
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-789000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-789000 logs -n 25: (1.441156151s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-942599  | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-816185                  | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-789000                 | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:37 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386085             | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-942599       | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:36 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:49 UTC | 05 Dec 24 20:49 UTC |
	| start   | -p newest-cni-024411 --memory=2200 --alsologtostderr   | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:49 UTC | 05 Dec 24 20:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-024411             | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:50 UTC | 05 Dec 24 20:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-024411                                   | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:50 UTC | 05 Dec 24 20:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-024411                  | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:50 UTC | 05 Dec 24 20:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-024411 --memory=2200 --alsologtostderr   | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:50 UTC | 05 Dec 24 20:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:51 UTC | 05 Dec 24 20:51 UTC |
	| start   | -p auto-383287 --memory=3072                           | auto-383287                  | jenkins | v1.34.0 | 05 Dec 24 20:51 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-024411 image list                           | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:51 UTC | 05 Dec 24 20:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-024411                                   | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:51 UTC | 05 Dec 24 20:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-024411                                   | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:51 UTC | 05 Dec 24 20:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-024411                                   | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:51 UTC | 05 Dec 24 20:51 UTC |
	| delete  | -p newest-cni-024411                                   | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:51 UTC | 05 Dec 24 20:51 UTC |
	| start   | -p enable-default-cni-383287                           | enable-default-cni-383287    | jenkins | v1.34.0 | 05 Dec 24 20:51 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --enable-default-cni=true                              |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:51:35
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:51:35.219181  593431 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:51:35.219313  593431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:51:35.219329  593431 out.go:358] Setting ErrFile to fd 2...
	I1205 20:51:35.219336  593431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:51:35.219519  593431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:51:35.220103  593431 out.go:352] Setting JSON to false
	I1205 20:51:35.221176  593431 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12841,"bootTime":1733419054,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:51:35.221295  593431 start.go:139] virtualization: kvm guest
	I1205 20:51:35.223481  593431 out.go:177] * [enable-default-cni-383287] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:51:35.224970  593431 notify.go:220] Checking for updates...
	I1205 20:51:35.224981  593431 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:51:35.226511  593431 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:51:35.227987  593431 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:51:35.229354  593431 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:51:35.230648  593431 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:51:35.231892  593431 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:51:35.233453  593431 config.go:182] Loaded profile config "auto-383287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:51:35.233547  593431 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:51:35.233626  593431 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:51:35.233722  593431 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:51:35.271823  593431 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:51:35.273054  593431 start.go:297] selected driver: kvm2
	I1205 20:51:35.273072  593431 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:51:35.273084  593431 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:51:35.273862  593431 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:51:35.273951  593431 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:51:35.289772  593431 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:51:35.289843  593431 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E1205 20:51:35.290102  593431 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1205 20:51:35.290127  593431 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:51:35.290162  593431 cni.go:84] Creating CNI manager for "bridge"
	I1205 20:51:35.290184  593431 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:51:35.290254  593431 start.go:340] cluster config:
	{Name:enable-default-cni-383287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-383287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:51:35.290384  593431 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:51:35.292279  593431 out.go:177] * Starting "enable-default-cni-383287" primary control-plane node in "enable-default-cni-383287" cluster
	I1205 20:51:33.485330  592666 main.go:141] libmachine: (auto-383287) DBG | domain auto-383287 has defined MAC address 52:54:00:c2:17:e6 in network mk-auto-383287
	I1205 20:51:33.485858  592666 main.go:141] libmachine: (auto-383287) DBG | unable to find current IP address of domain auto-383287 in network mk-auto-383287
	I1205 20:51:33.485891  592666 main.go:141] libmachine: (auto-383287) DBG | I1205 20:51:33.485825  592707 retry.go:31] will retry after 3.727049032s: waiting for machine to come up
	I1205 20:51:35.293600  593431 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:51:35.293633  593431 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:51:35.293644  593431 cache.go:56] Caching tarball of preloaded images
	I1205 20:51:35.293740  593431 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:51:35.293751  593431 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:51:35.293831  593431 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/enable-default-cni-383287/config.json ...
	I1205 20:51:35.293848  593431 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/enable-default-cni-383287/config.json: {Name:mkc052c3be61ac017d2689550d336df2498cdfb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:51:35.293978  593431 start.go:360] acquireMachinesLock for enable-default-cni-383287: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:51:38.842117  593431 start.go:364] duration metric: took 3.54811632s to acquireMachinesLock for "enable-default-cni-383287"
	I1205 20:51:38.842181  593431 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-383287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-383287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:51:38.842312  593431 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.475850808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431899475829137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0632280b-4ec7-4dec-8046-ba0a74cff436 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.476538298Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc49e129-a6cf-4ea7-bfee-9f997fb05724 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.476610404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc49e129-a6cf-4ea7-bfee-9f997fb05724 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.476806474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3590f9508a3b09c552a77ad99852b72a135a2ec395476bf71cac9cba129609b,PodSandboxId:667ddfeba1da3a7fe58f2d2a2b29adf71c9ee53253c97c12c0005a1e9578c25e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430949452646939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2808c8da-8904-45a0-ae68-bfd68681540f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b462b77c174cac4d84d74fc00ce85e02aaef3f18d3808b44546bf941ac0cb1c,PodSandboxId:a26a5b0e3d29568b7bd6f0f497008d92d2a12fa6ab1f1ce3b8dbc9cc5136cff2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430949030934095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rh6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bdd8a47-abec-4dc4-a1ed-4a9a124417a3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0b5402930c1ec85b6c1e1b26d2c9ae3690ece4afc292821db305a7153157e1,PodSandboxId:a3bd07f6e1d967b7e5db80edef61810dce74c7d2527a4a8317756c3408391e50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430948836480376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6mp2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1aaefd9-c549-4065-b3dd-a0e4d925e592,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48332dd170fb8de47499e97df295d87bd84b9b1168d8de60ad34389930087b21,PodSandboxId:00e986108a7e8293ce3923847989281cc8c71c7385847293b744a5058aa9f6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733430947399651285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-znjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df1a22-d7e0-4a83-84dd-0e710185ded6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c198cf9812acaa3935b068fb6be235089141a68ab9f7163d841c3efd8f50de,PodSandboxId:54120b1ea76b62e9483d05b8886c083497d48bc5ed72f841331de381baf99b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430937122232254
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af9a21bfb03bc31f2f91411d7d8bd82,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb5ec3a2c4a30d4f224867a4150377f1dbee0b64a84ec5b60995cbad230dbd0,PodSandboxId:2d504d8e3573c07d52511c9893b2a60eb84d81276fb9d93a890b85d5d772c271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430937117
836470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9c4239ce8abe6c3eb5781fffc7f358,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab61f60cbe2c60f64877fe93a63f225ae437aa73fa09c4e1c45066805ed0c55,PodSandboxId:dfeb72c01827e0b29ce2360805da81526b7e12237ef9be36d577b1d74e30bae5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430937097155965,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68d1f88de87c2553b2b0d9b84e5dd72,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217f0ccb4526a1479f5a4dab73685aef7aba41064c97a4b142e0b617c510b39c,PodSandboxId:75d0129543712bbcc085162b673b18e317596cd72591b938f561e8633fb7feb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430937001245655,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c6cf0dd68ac2fdf5f39b36f0c8463645f13569af5dfd13d8db86ce45446171a,PodSandboxId:38d7aa2c1d75ef87b678906920ead810edfd47f8bd957bf7d0a1d4073314f23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430652292743904,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc49e129-a6cf-4ea7-bfee-9f997fb05724 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.519495658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7c5cc29-2755-4c82-8b4e-0de823416286 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.519619466Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7c5cc29-2755-4c82-8b4e-0de823416286 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.521760450Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa2fa570-c976-4e45-8fe8-4d0eceaa1293 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.522457534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431899522311884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa2fa570-c976-4e45-8fe8-4d0eceaa1293 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.523588245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae2b3a6e-6153-4e6f-b611-704a1a700dbe name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.523657040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae2b3a6e-6153-4e6f-b611-704a1a700dbe name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.524024321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3590f9508a3b09c552a77ad99852b72a135a2ec395476bf71cac9cba129609b,PodSandboxId:667ddfeba1da3a7fe58f2d2a2b29adf71c9ee53253c97c12c0005a1e9578c25e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430949452646939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2808c8da-8904-45a0-ae68-bfd68681540f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b462b77c174cac4d84d74fc00ce85e02aaef3f18d3808b44546bf941ac0cb1c,PodSandboxId:a26a5b0e3d29568b7bd6f0f497008d92d2a12fa6ab1f1ce3b8dbc9cc5136cff2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430949030934095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rh6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bdd8a47-abec-4dc4-a1ed-4a9a124417a3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0b5402930c1ec85b6c1e1b26d2c9ae3690ece4afc292821db305a7153157e1,PodSandboxId:a3bd07f6e1d967b7e5db80edef61810dce74c7d2527a4a8317756c3408391e50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430948836480376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6mp2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1aaefd9-c549-4065-b3dd-a0e4d925e592,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48332dd170fb8de47499e97df295d87bd84b9b1168d8de60ad34389930087b21,PodSandboxId:00e986108a7e8293ce3923847989281cc8c71c7385847293b744a5058aa9f6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733430947399651285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-znjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df1a22-d7e0-4a83-84dd-0e710185ded6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c198cf9812acaa3935b068fb6be235089141a68ab9f7163d841c3efd8f50de,PodSandboxId:54120b1ea76b62e9483d05b8886c083497d48bc5ed72f841331de381baf99b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430937122232254
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af9a21bfb03bc31f2f91411d7d8bd82,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb5ec3a2c4a30d4f224867a4150377f1dbee0b64a84ec5b60995cbad230dbd0,PodSandboxId:2d504d8e3573c07d52511c9893b2a60eb84d81276fb9d93a890b85d5d772c271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430937117
836470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9c4239ce8abe6c3eb5781fffc7f358,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab61f60cbe2c60f64877fe93a63f225ae437aa73fa09c4e1c45066805ed0c55,PodSandboxId:dfeb72c01827e0b29ce2360805da81526b7e12237ef9be36d577b1d74e30bae5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430937097155965,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68d1f88de87c2553b2b0d9b84e5dd72,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217f0ccb4526a1479f5a4dab73685aef7aba41064c97a4b142e0b617c510b39c,PodSandboxId:75d0129543712bbcc085162b673b18e317596cd72591b938f561e8633fb7feb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430937001245655,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c6cf0dd68ac2fdf5f39b36f0c8463645f13569af5dfd13d8db86ce45446171a,PodSandboxId:38d7aa2c1d75ef87b678906920ead810edfd47f8bd957bf7d0a1d4073314f23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430652292743904,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae2b3a6e-6153-4e6f-b611-704a1a700dbe name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.576206061Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25b68d6e-3c0b-4b45-a764-178c609e6be0 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.576276373Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25b68d6e-3c0b-4b45-a764-178c609e6be0 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.577606348Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57458783-bf9e-405f-a3a4-9215d5bc4bd3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.577978950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431899577957903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57458783-bf9e-405f-a3a4-9215d5bc4bd3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.578899540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df809e31-3471-4569-8b08-7fa189af526b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.578950319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df809e31-3471-4569-8b08-7fa189af526b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.579276775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3590f9508a3b09c552a77ad99852b72a135a2ec395476bf71cac9cba129609b,PodSandboxId:667ddfeba1da3a7fe58f2d2a2b29adf71c9ee53253c97c12c0005a1e9578c25e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430949452646939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2808c8da-8904-45a0-ae68-bfd68681540f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b462b77c174cac4d84d74fc00ce85e02aaef3f18d3808b44546bf941ac0cb1c,PodSandboxId:a26a5b0e3d29568b7bd6f0f497008d92d2a12fa6ab1f1ce3b8dbc9cc5136cff2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430949030934095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rh6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bdd8a47-abec-4dc4-a1ed-4a9a124417a3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0b5402930c1ec85b6c1e1b26d2c9ae3690ece4afc292821db305a7153157e1,PodSandboxId:a3bd07f6e1d967b7e5db80edef61810dce74c7d2527a4a8317756c3408391e50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430948836480376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6mp2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1aaefd9-c549-4065-b3dd-a0e4d925e592,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48332dd170fb8de47499e97df295d87bd84b9b1168d8de60ad34389930087b21,PodSandboxId:00e986108a7e8293ce3923847989281cc8c71c7385847293b744a5058aa9f6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733430947399651285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-znjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df1a22-d7e0-4a83-84dd-0e710185ded6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c198cf9812acaa3935b068fb6be235089141a68ab9f7163d841c3efd8f50de,PodSandboxId:54120b1ea76b62e9483d05b8886c083497d48bc5ed72f841331de381baf99b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430937122232254
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af9a21bfb03bc31f2f91411d7d8bd82,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb5ec3a2c4a30d4f224867a4150377f1dbee0b64a84ec5b60995cbad230dbd0,PodSandboxId:2d504d8e3573c07d52511c9893b2a60eb84d81276fb9d93a890b85d5d772c271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430937117
836470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9c4239ce8abe6c3eb5781fffc7f358,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab61f60cbe2c60f64877fe93a63f225ae437aa73fa09c4e1c45066805ed0c55,PodSandboxId:dfeb72c01827e0b29ce2360805da81526b7e12237ef9be36d577b1d74e30bae5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430937097155965,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68d1f88de87c2553b2b0d9b84e5dd72,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217f0ccb4526a1479f5a4dab73685aef7aba41064c97a4b142e0b617c510b39c,PodSandboxId:75d0129543712bbcc085162b673b18e317596cd72591b938f561e8633fb7feb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430937001245655,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c6cf0dd68ac2fdf5f39b36f0c8463645f13569af5dfd13d8db86ce45446171a,PodSandboxId:38d7aa2c1d75ef87b678906920ead810edfd47f8bd957bf7d0a1d4073314f23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430652292743904,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df809e31-3471-4569-8b08-7fa189af526b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.620544305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ead89f67-f363-46b1-bfeb-c11b31e9c75c name=/runtime.v1.RuntimeService/Version
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.620632968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ead89f67-f363-46b1-bfeb-c11b31e9c75c name=/runtime.v1.RuntimeService/Version
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.622903447Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e266dc54-8530-47ba-bed8-96a43fd01eca name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.623556131Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431899623522496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e266dc54-8530-47ba-bed8-96a43fd01eca name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.624348189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8c00b9e-4124-459e-838c-b943e39b32a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.624500957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8c00b9e-4124-459e-838c-b943e39b32a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:39 embed-certs-789000 crio[718]: time="2024-12-05 20:51:39.624762202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3590f9508a3b09c552a77ad99852b72a135a2ec395476bf71cac9cba129609b,PodSandboxId:667ddfeba1da3a7fe58f2d2a2b29adf71c9ee53253c97c12c0005a1e9578c25e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430949452646939,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2808c8da-8904-45a0-ae68-bfd68681540f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b462b77c174cac4d84d74fc00ce85e02aaef3f18d3808b44546bf941ac0cb1c,PodSandboxId:a26a5b0e3d29568b7bd6f0f497008d92d2a12fa6ab1f1ce3b8dbc9cc5136cff2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430949030934095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rh6pj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bdd8a47-abec-4dc4-a1ed-4a9a124417a3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0b5402930c1ec85b6c1e1b26d2c9ae3690ece4afc292821db305a7153157e1,PodSandboxId:a3bd07f6e1d967b7e5db80edef61810dce74c7d2527a4a8317756c3408391e50,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430948836480376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6mp2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
1aaefd9-c549-4065-b3dd-a0e4d925e592,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48332dd170fb8de47499e97df295d87bd84b9b1168d8de60ad34389930087b21,PodSandboxId:00e986108a7e8293ce3923847989281cc8c71c7385847293b744a5058aa9f6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733430947399651285,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-znjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df1a22-d7e0-4a83-84dd-0e710185ded6,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c198cf9812acaa3935b068fb6be235089141a68ab9f7163d841c3efd8f50de,PodSandboxId:54120b1ea76b62e9483d05b8886c083497d48bc5ed72f841331de381baf99b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430937122232254
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af9a21bfb03bc31f2f91411d7d8bd82,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb5ec3a2c4a30d4f224867a4150377f1dbee0b64a84ec5b60995cbad230dbd0,PodSandboxId:2d504d8e3573c07d52511c9893b2a60eb84d81276fb9d93a890b85d5d772c271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430937117
836470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d9c4239ce8abe6c3eb5781fffc7f358,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab61f60cbe2c60f64877fe93a63f225ae437aa73fa09c4e1c45066805ed0c55,PodSandboxId:dfeb72c01827e0b29ce2360805da81526b7e12237ef9be36d577b1d74e30bae5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430937097155965,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f68d1f88de87c2553b2b0d9b84e5dd72,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:217f0ccb4526a1479f5a4dab73685aef7aba41064c97a4b142e0b617c510b39c,PodSandboxId:75d0129543712bbcc085162b673b18e317596cd72591b938f561e8633fb7feb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430937001245655,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c6cf0dd68ac2fdf5f39b36f0c8463645f13569af5dfd13d8db86ce45446171a,PodSandboxId:38d7aa2c1d75ef87b678906920ead810edfd47f8bd957bf7d0a1d4073314f23d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430652292743904,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-789000,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d150fe239f3ab0d40ea6589f44553acb,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8c00b9e-4124-459e-838c-b943e39b32a5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b3590f9508a3b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   667ddfeba1da3       storage-provisioner
	0b462b77c174c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   a26a5b0e3d295       coredns-7c65d6cfc9-rh6pj
	be0b5402930c1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   a3bd07f6e1d96       coredns-7c65d6cfc9-6mp2h
	48332dd170fb8       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   15 minutes ago      Running             kube-proxy                0                   00e986108a7e8       kube-proxy-znjpk
	f8c198cf9812a       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   16 minutes ago      Running             kube-controller-manager   2                   54120b1ea76b6       kube-controller-manager-embed-certs-789000
	6eb5ec3a2c4a3       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   16 minutes ago      Running             kube-scheduler            2                   2d504d8e3573c       kube-scheduler-embed-certs-789000
	2ab61f60cbe2c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   dfeb72c01827e       etcd-embed-certs-789000
	217f0ccb4526a       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   16 minutes ago      Running             kube-apiserver            2                   75d0129543712       kube-apiserver-embed-certs-789000
	3c6cf0dd68ac2       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   20 minutes ago      Exited              kube-apiserver            1                   38d7aa2c1d75e       kube-apiserver-embed-certs-789000
	
	
	==> coredns [0b462b77c174cac4d84d74fc00ce85e02aaef3f18d3808b44546bf941ac0cb1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [be0b5402930c1ec85b6c1e1b26d2c9ae3690ece4afc292821db305a7153157e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-789000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-789000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=embed-certs-789000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_35_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:35:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-789000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:51:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:51:10 +0000   Thu, 05 Dec 2024 20:35:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:51:10 +0000   Thu, 05 Dec 2024 20:35:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:51:10 +0000   Thu, 05 Dec 2024 20:35:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:51:10 +0000   Thu, 05 Dec 2024 20:35:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    embed-certs-789000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14bd481cd3474e2db5e4383ceddf4f11
	  System UUID:                14bd481c-d347-4e2d-b5e4-383ceddf4f11
	  Boot ID:                    8a1a0da2-2faa-4c95-9a90-12d042e0f521
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6mp2h                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-rh6pj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-789000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-789000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-789000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-znjpk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-789000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-cs42k               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-789000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-789000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-789000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-789000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-789000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-789000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-789000 event: Registered Node embed-certs-789000 in Controller
	
	
	==> dmesg <==
	[  +0.052501] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041960] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.965205] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.772504] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.648694] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.286428] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.057035] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075050] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.179907] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.169729] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.317023] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[  +4.500262] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +0.067306] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.101300] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +4.578459] kauditd_printk_skb: 97 callbacks suppressed
	[Dec 5 20:31] kauditd_printk_skb: 85 callbacks suppressed
	[Dec 5 20:35] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.706956] systemd-fstab-generator[2617]: Ignoring "noauto" option for root device
	[  +4.576632] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.484793] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +5.362620] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.098938] systemd-fstab-generator[3091]: Ignoring "noauto" option for root device
	[  +4.974915] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [2ab61f60cbe2c60f64877fe93a63f225ae437aa73fa09c4e1c45066805ed0c55] <==
	{"level":"info","ts":"2024-12-05T20:35:37.529989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:35:37.530771Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:35:37.530904Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d37198946ef4128","local-member-id":"fe8c4457455e3a5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:35:37.530995Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:35:37.531038Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:35:37.540714Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:35:37.541479Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.200:2379"}
	{"level":"info","ts":"2024-12-05T20:45:37.980592Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2024-12-05T20:45:37.990994Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":724,"took":"10.072082ms","hash":3060925740,"current-db-size-bytes":2342912,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2342912,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-12-05T20:45:37.991069Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3060925740,"revision":724,"compact-revision":-1}
	{"level":"warn","ts":"2024-12-05T20:50:20.763544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.396315ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:50:20.764085Z","caller":"traceutil/trace.go:171","msg":"trace[1196461486] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1195; }","duration":"132.981728ms","start":"2024-12-05T20:50:20.631030Z","end":"2024-12-05T20:50:20.764012Z","steps":["trace[1196461486] 'range keys from in-memory index tree'  (duration: 132.29095ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:50:21.059972Z","caller":"traceutil/trace.go:171","msg":"trace[1941749252] linearizableReadLoop","detail":"{readStateIndex:1389; appliedIndex:1388; }","duration":"295.741524ms","start":"2024-12-05T20:50:20.764204Z","end":"2024-12-05T20:50:21.059946Z","steps":["trace[1941749252] 'read index received'  (duration: 295.539627ms)","trace[1941749252] 'applied index is now lower than readState.Index'  (duration: 201.408µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T20:50:21.060108Z","caller":"traceutil/trace.go:171","msg":"trace[1153710187] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"343.138577ms","start":"2024-12-05T20:50:20.716956Z","end":"2024-12-05T20:50:21.060095Z","steps":["trace[1153710187] 'process raft request'  (duration: 342.881581ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:50:21.060471Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.280619ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:50:21.061361Z","caller":"traceutil/trace.go:171","msg":"trace[2064116900] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1196; }","duration":"297.17185ms","start":"2024-12-05T20:50:20.764170Z","end":"2024-12-05T20:50:21.061342Z","steps":["trace[2064116900] 'agreement among raft nodes before linearized reading'  (duration: 296.272423ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:50:21.060625Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T20:50:20.716940Z","time spent":"343.190716ms","remote":"127.0.0.1:59732","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-789000\" mod_revision:1188 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-789000\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-789000\" > >"}
	{"level":"warn","ts":"2024-12-05T20:50:21.743910Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.982543ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16403679501110173623 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.200\" mod_revision:1189 > success:<request_put:<key:\"/registry/masterleases/192.168.39.200\" value_size:67 lease:7180307464255397813 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.200\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-05T20:50:21.744055Z","caller":"traceutil/trace.go:171","msg":"trace[715339822] linearizableReadLoop","detail":"{readStateIndex:1391; appliedIndex:1390; }","duration":"112.99192ms","start":"2024-12-05T20:50:21.631053Z","end":"2024-12-05T20:50:21.744045Z","steps":["trace[715339822] 'read index received'  (duration: 30.901µs)","trace[715339822] 'applied index is now lower than readState.Index'  (duration: 112.959779ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T20:50:21.744127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.070949ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:50:21.744142Z","caller":"traceutil/trace.go:171","msg":"trace[1711656194] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1197; }","duration":"113.090544ms","start":"2024-12-05T20:50:21.631046Z","end":"2024-12-05T20:50:21.744137Z","steps":["trace[1711656194] 'agreement among raft nodes before linearized reading'  (duration: 113.034087ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:50:21.744316Z","caller":"traceutil/trace.go:171","msg":"trace[1766221933] transaction","detail":"{read_only:false; response_revision:1197; number_of_response:1; }","duration":"258.407511ms","start":"2024-12-05T20:50:21.485900Z","end":"2024-12-05T20:50:21.744307Z","steps":["trace[1766221933] 'process raft request'  (duration: 127.810519ms)","trace[1766221933] 'compare'  (duration: 129.851315ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T20:50:37.989569Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-12-05T20:50:38.002759Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":966,"took":"11.594858ms","hash":920415334,"current-db-size-bytes":2342912,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1617920,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-05T20:50:38.002913Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":920415334,"revision":966,"compact-revision":724}
	
	
	==> kernel <==
	 20:51:40 up 21 min,  0 users,  load average: 0.19, 0.28, 0.22
	Linux embed-certs-789000 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [217f0ccb4526a1479f5a4dab73685aef7aba41064c97a4b142e0b617c510b39c] <==
	I1205 20:46:40.735485       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:46:40.735529       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:48:40.736321       1 handler_proxy.go:99] no RequestInfo found in the context
	W1205 20:48:40.736321       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:48:40.736601       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1205 20:48:40.736685       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:48:40.737977       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:48:40.738006       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:50:39.735265       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:50:39.735488       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 20:50:40.737826       1 handler_proxy.go:99] no RequestInfo found in the context
	W1205 20:50:40.737970       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:50:40.738013       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1205 20:50:40.738151       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:50:40.739211       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:50:40.739223       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [3c6cf0dd68ac2fdf5f39b36f0c8463645f13569af5dfd13d8db86ce45446171a] <==
	W1205 20:35:32.266632       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.276571       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.329791       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.346490       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.378235       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.508014       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.525936       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.563902       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.596833       1 logging.go:55] [core] [Channel #15 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.627285       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.642488       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.647051       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.700895       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.716652       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.716783       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.833345       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.858843       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.888650       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.910121       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.918976       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:32.975857       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:33.002664       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:33.097869       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:33.195828       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:35:33.293851       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [f8c198cf9812acaa3935b068fb6be235089141a68ab9f7163d841c3efd8f50de] <==
	E1205 20:46:16.827114       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:46:17.276962       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:46:46.834889       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:46:47.286559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:47:07.356743       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="192.441µs"
	E1205 20:47:16.841978       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:47:17.294993       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:47:18.360637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="163.879µs"
	E1205 20:47:46.849303       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:47:47.305350       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:48:16.855542       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:48:17.314050       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:48:46.862946       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:48:47.322510       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:49:16.870056       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:49:17.329863       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:49:46.880205       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:49:47.338839       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:50:16.888469       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:50:17.350087       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:50:46.896161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:50:47.359291       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:51:10.734151       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-789000"
	E1205 20:51:16.902774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:51:17.370985       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [48332dd170fb8de47499e97df295d87bd84b9b1168d8de60ad34389930087b21] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:35:48.011075       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:35:48.034284       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	E1205 20:35:48.034437       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:35:48.165753       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:35:48.165799       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:35:48.165834       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:35:48.172474       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:35:48.172821       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:35:48.172851       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:35:48.174526       1 config.go:199] "Starting service config controller"
	I1205 20:35:48.174570       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:35:48.174617       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:35:48.174625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:35:48.179107       1 config.go:328] "Starting node config controller"
	I1205 20:35:48.179230       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:35:48.275928       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:35:48.275991       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:35:48.283499       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6eb5ec3a2c4a30d4f224867a4150377f1dbee0b64a84ec5b60995cbad230dbd0] <==
	W1205 20:35:39.739825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:35:39.740291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.554854       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:35:40.554916       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 20:35:40.602749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:35:40.602857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.672994       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:35:40.673032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.747640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 20:35:40.747694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.754126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:35:40.754180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.856235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:35:40.856286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.903042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:35:40.903195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.916593       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:35:40.916724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.962609       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:35:40.962644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:40.964705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:35:40.964753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:35:41.062198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:35:41.062845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1205 20:35:42.930209       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:50:32 embed-certs-789000 kubelet[2947]: E1205 20:50:32.580268    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431832579124984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:32 embed-certs-789000 kubelet[2947]: E1205 20:50:32.580851    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431832579124984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:33 embed-certs-789000 kubelet[2947]: E1205 20:50:33.340169    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	Dec 05 20:50:42 embed-certs-789000 kubelet[2947]: E1205 20:50:42.377683    2947 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 20:50:42 embed-certs-789000 kubelet[2947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:50:42 embed-certs-789000 kubelet[2947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:50:42 embed-certs-789000 kubelet[2947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:50:42 embed-certs-789000 kubelet[2947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:50:42 embed-certs-789000 kubelet[2947]: E1205 20:50:42.583908    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431842583074471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:42 embed-certs-789000 kubelet[2947]: E1205 20:50:42.583937    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431842583074471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:46 embed-certs-789000 kubelet[2947]: E1205 20:50:46.340884    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	Dec 05 20:50:52 embed-certs-789000 kubelet[2947]: E1205 20:50:52.585671    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431852585110309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:52 embed-certs-789000 kubelet[2947]: E1205 20:50:52.586577    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431852585110309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:00 embed-certs-789000 kubelet[2947]: E1205 20:51:00.339806    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	Dec 05 20:51:02 embed-certs-789000 kubelet[2947]: E1205 20:51:02.589461    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431862588904380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:02 embed-certs-789000 kubelet[2947]: E1205 20:51:02.590066    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431862588904380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:12 embed-certs-789000 kubelet[2947]: E1205 20:51:12.341587    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	Dec 05 20:51:12 embed-certs-789000 kubelet[2947]: E1205 20:51:12.592454    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431872591844582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:12 embed-certs-789000 kubelet[2947]: E1205 20:51:12.592496    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431872591844582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:22 embed-certs-789000 kubelet[2947]: E1205 20:51:22.594460    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431882593752269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:22 embed-certs-789000 kubelet[2947]: E1205 20:51:22.595067    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431882593752269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:26 embed-certs-789000 kubelet[2947]: E1205 20:51:26.340804    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	Dec 05 20:51:32 embed-certs-789000 kubelet[2947]: E1205 20:51:32.597167    2947 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431892596733360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:32 embed-certs-789000 kubelet[2947]: E1205 20:51:32.597205    2947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431892596733360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:39 embed-certs-789000 kubelet[2947]: E1205 20:51:39.339581    2947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-cs42k" podUID="98b266c3-8ff0-4dc6-9c43-374dcd7c074a"
	
	
	==> storage-provisioner [b3590f9508a3b09c552a77ad99852b72a135a2ec395476bf71cac9cba129609b] <==
	I1205 20:35:49.596428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:35:49.626633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:35:49.626819       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:35:49.644774       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:35:49.645582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc39230d-60a9-4f43-90b2-51b526f81b18", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-789000_3b797468-49ed-4acf-b247-e4982cdae2fa became leader
	I1205 20:35:49.645631       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-789000_3b797468-49ed-4acf-b247-e4982cdae2fa!
	I1205 20:35:49.746769       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-789000_3b797468-49ed-4acf-b247-e4982cdae2fa!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-789000 -n embed-certs-789000
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-789000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-cs42k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-789000 describe pod metrics-server-6867b74b74-cs42k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-789000 describe pod metrics-server-6867b74b74-cs42k: exit status 1 (104.95531ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-cs42k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-789000 describe pod metrics-server-6867b74b74-cs42k: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (397.95s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (488.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-05 20:53:53.009322944 +0000 UTC m=+6725.214943282
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-942599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-942599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.047µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-942599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-942599 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-942599 logs -n 25: (1.54072114s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-383287 sudo ip r s                        | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | iptables-save                                        |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | iptables -t nat -L -n -v                             |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo cat                           | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | /run/flannel/subnet.env                              |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo cat                           | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | systemctl status kubelet --all                       |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo cat                           | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo cat                           | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo cat                           | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo docker                        | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo cat                           | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo cat                           | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo cat                           | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo cat                           | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC | 05 Dec 24 20:53 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p flannel-383287 sudo                               | flannel-383287 | jenkins | v1.34.0 | 05 Dec 24 20:53 UTC |                     |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:53:13
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:53:13.464481  597303 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:53:13.464625  597303 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:53:13.464638  597303 out.go:358] Setting ErrFile to fd 2...
	I1205 20:53:13.464646  597303 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:53:13.465265  597303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:53:13.466531  597303 out.go:352] Setting JSON to false
	I1205 20:53:13.467752  597303 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12939,"bootTime":1733419054,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:53:13.467870  597303 start.go:139] virtualization: kvm guest
	I1205 20:53:13.469793  597303 out.go:177] * [calico-383287] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:53:13.471597  597303 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:53:13.471635  597303 notify.go:220] Checking for updates...
	I1205 20:53:13.473915  597303 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:53:13.475255  597303 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:53:13.476523  597303 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:53:13.477908  597303 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:53:13.479319  597303 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:53:13.481091  597303 config.go:182] Loaded profile config "bridge-383287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:53:13.481215  597303 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:53:13.481329  597303 config.go:182] Loaded profile config "flannel-383287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:53:13.481438  597303 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:53:13.520728  597303 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:53:13.521974  597303 start.go:297] selected driver: kvm2
	I1205 20:53:13.521988  597303 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:53:13.522000  597303 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:53:13.522826  597303 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:53:13.522929  597303 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:53:13.539206  597303 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:53:13.539277  597303 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:53:13.539600  597303 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:53:13.539648  597303 cni.go:84] Creating CNI manager for "calico"
	I1205 20:53:13.539656  597303 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1205 20:53:13.539722  597303 start.go:340] cluster config:
	{Name:calico-383287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-383287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:53:13.539867  597303 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:53:13.541773  597303 out.go:177] * Starting "calico-383287" primary control-plane node in "calico-383287" cluster
	I1205 20:53:09.365402  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:09.365923  596117 main.go:141] libmachine: (bridge-383287) DBG | unable to find current IP address of domain bridge-383287 in network mk-bridge-383287
	I1205 20:53:09.365955  596117 main.go:141] libmachine: (bridge-383287) DBG | I1205 20:53:09.365854  596163 retry.go:31] will retry after 1.09772752s: waiting for machine to come up
	I1205 20:53:10.465591  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:10.466129  596117 main.go:141] libmachine: (bridge-383287) DBG | unable to find current IP address of domain bridge-383287 in network mk-bridge-383287
	I1205 20:53:10.466172  596117 main.go:141] libmachine: (bridge-383287) DBG | I1205 20:53:10.466119  596163 retry.go:31] will retry after 1.192702262s: waiting for machine to come up
	I1205 20:53:11.659957  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:11.660468  596117 main.go:141] libmachine: (bridge-383287) DBG | unable to find current IP address of domain bridge-383287 in network mk-bridge-383287
	I1205 20:53:11.660500  596117 main.go:141] libmachine: (bridge-383287) DBG | I1205 20:53:11.660418  596163 retry.go:31] will retry after 1.667481061s: waiting for machine to come up
	I1205 20:53:13.329471  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:13.329961  596117 main.go:141] libmachine: (bridge-383287) DBG | unable to find current IP address of domain bridge-383287 in network mk-bridge-383287
	I1205 20:53:13.329993  596117 main.go:141] libmachine: (bridge-383287) DBG | I1205 20:53:13.329903  596163 retry.go:31] will retry after 2.225844453s: waiting for machine to come up
	I1205 20:53:13.896981  593786 pod_ready.go:103] pod "coredns-7c65d6cfc9-dxbbd" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:15.898553  593786 pod_ready.go:103] pod "coredns-7c65d6cfc9-dxbbd" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.543513  597303 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:53:13.543571  597303 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:53:13.543586  597303 cache.go:56] Caching tarball of preloaded images
	I1205 20:53:13.543688  597303 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:53:13.543701  597303 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:53:13.543827  597303 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/calico-383287/config.json ...
	I1205 20:53:13.543853  597303 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/calico-383287/config.json: {Name:mkf09618cc2d8f733f972848de74e23c8d7d5989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:53:13.544026  597303 start.go:360] acquireMachinesLock for calico-383287: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:53:15.557870  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:15.558360  596117 main.go:141] libmachine: (bridge-383287) DBG | unable to find current IP address of domain bridge-383287 in network mk-bridge-383287
	I1205 20:53:15.558405  596117 main.go:141] libmachine: (bridge-383287) DBG | I1205 20:53:15.558303  596163 retry.go:31] will retry after 2.503823102s: waiting for machine to come up
	I1205 20:53:18.064153  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:18.064866  596117 main.go:141] libmachine: (bridge-383287) DBG | unable to find current IP address of domain bridge-383287 in network mk-bridge-383287
	I1205 20:53:18.064897  596117 main.go:141] libmachine: (bridge-383287) DBG | I1205 20:53:18.064813  596163 retry.go:31] will retry after 3.015044174s: waiting for machine to come up
	I1205 20:53:17.903038  593786 pod_ready.go:103] pod "coredns-7c65d6cfc9-dxbbd" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:19.396475  593786 pod_ready.go:93] pod "coredns-7c65d6cfc9-dxbbd" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:19.396501  593786 pod_ready.go:82] duration metric: took 15.00639902s for pod "coredns-7c65d6cfc9-dxbbd" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:19.396512  593786 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-383287" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:19.401367  593786 pod_ready.go:93] pod "etcd-flannel-383287" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:19.401391  593786 pod_ready.go:82] duration metric: took 4.872369ms for pod "etcd-flannel-383287" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:19.401402  593786 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-383287" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:19.405627  593786 pod_ready.go:93] pod "kube-apiserver-flannel-383287" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:19.405647  593786 pod_ready.go:82] duration metric: took 4.239101ms for pod "kube-apiserver-flannel-383287" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:19.405657  593786 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-383287" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:19.410008  593786 pod_ready.go:93] pod "kube-controller-manager-flannel-383287" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:19.410033  593786 pod_ready.go:82] duration metric: took 4.369206ms for pod "kube-controller-manager-flannel-383287" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:19.410045  593786 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-frzpl" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:19.414314  593786 pod_ready.go:93] pod "kube-proxy-frzpl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:19.414342  593786 pod_ready.go:82] duration metric: took 4.289832ms for pod "kube-proxy-frzpl" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:19.414354  593786 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-383287" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:19.794207  593786 pod_ready.go:93] pod "kube-scheduler-flannel-383287" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:19.794232  593786 pod_ready.go:82] duration metric: took 379.869818ms for pod "kube-scheduler-flannel-383287" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:19.794245  593786 pod_ready.go:39] duration metric: took 15.422043778s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:53:19.794262  593786 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:53:19.794317  593786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:53:19.810633  593786 api_server.go:72] duration metric: took 25.405090448s to wait for apiserver process to appear ...
	I1205 20:53:19.810664  593786 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:53:19.810686  593786 api_server.go:253] Checking apiserver healthz at https://192.168.39.186:8443/healthz ...
	I1205 20:53:19.815935  593786 api_server.go:279] https://192.168.39.186:8443/healthz returned 200:
	ok
	I1205 20:53:19.816960  593786 api_server.go:141] control plane version: v1.31.2
	I1205 20:53:19.816986  593786 api_server.go:131] duration metric: took 6.315912ms to wait for apiserver health ...
	I1205 20:53:19.816993  593786 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:53:19.997255  593786 system_pods.go:59] 7 kube-system pods found
	I1205 20:53:19.997289  593786 system_pods.go:61] "coredns-7c65d6cfc9-dxbbd" [c68dd1cf-28d6-4541-87cf-7bdd979caa9a] Running
	I1205 20:53:19.997294  593786 system_pods.go:61] "etcd-flannel-383287" [41269c5f-1165-4efd-9688-380f4ad4acbc] Running
	I1205 20:53:19.997297  593786 system_pods.go:61] "kube-apiserver-flannel-383287" [04f69dad-bcb2-4ff9-ac05-9719237da7d7] Running
	I1205 20:53:19.997301  593786 system_pods.go:61] "kube-controller-manager-flannel-383287" [ad4036f6-1217-40b7-9c27-840ec1ec154a] Running
	I1205 20:53:19.997306  593786 system_pods.go:61] "kube-proxy-frzpl" [c9f63018-3919-4ac4-8a5a-61e2b1b7e1b9] Running
	I1205 20:53:19.997311  593786 system_pods.go:61] "kube-scheduler-flannel-383287" [111e562f-daab-4582-833d-024bf186cd4c] Running
	I1205 20:53:19.997314  593786 system_pods.go:61] "storage-provisioner" [a8c0e830-84c3-4e57-9a25-b2e3921163c0] Running
	I1205 20:53:19.997327  593786 system_pods.go:74] duration metric: took 180.320992ms to wait for pod list to return data ...
	I1205 20:53:19.997335  593786 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:53:20.194866  593786 default_sa.go:45] found service account: "default"
	I1205 20:53:20.194896  593786 default_sa.go:55] duration metric: took 197.554876ms for default service account to be created ...
	I1205 20:53:20.194906  593786 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:53:20.397557  593786 system_pods.go:86] 7 kube-system pods found
	I1205 20:53:20.397593  593786 system_pods.go:89] "coredns-7c65d6cfc9-dxbbd" [c68dd1cf-28d6-4541-87cf-7bdd979caa9a] Running
	I1205 20:53:20.397601  593786 system_pods.go:89] "etcd-flannel-383287" [41269c5f-1165-4efd-9688-380f4ad4acbc] Running
	I1205 20:53:20.397607  593786 system_pods.go:89] "kube-apiserver-flannel-383287" [04f69dad-bcb2-4ff9-ac05-9719237da7d7] Running
	I1205 20:53:20.397613  593786 system_pods.go:89] "kube-controller-manager-flannel-383287" [ad4036f6-1217-40b7-9c27-840ec1ec154a] Running
	I1205 20:53:20.397618  593786 system_pods.go:89] "kube-proxy-frzpl" [c9f63018-3919-4ac4-8a5a-61e2b1b7e1b9] Running
	I1205 20:53:20.397623  593786 system_pods.go:89] "kube-scheduler-flannel-383287" [111e562f-daab-4582-833d-024bf186cd4c] Running
	I1205 20:53:20.397627  593786 system_pods.go:89] "storage-provisioner" [a8c0e830-84c3-4e57-9a25-b2e3921163c0] Running
	I1205 20:53:20.397637  593786 system_pods.go:126] duration metric: took 202.723709ms to wait for k8s-apps to be running ...
	I1205 20:53:20.397647  593786 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:53:20.397701  593786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:53:20.412829  593786 system_svc.go:56] duration metric: took 15.169507ms WaitForService to wait for kubelet
	I1205 20:53:20.412866  593786 kubeadm.go:582] duration metric: took 26.007328073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:53:20.412891  593786 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:53:20.594472  593786 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:53:20.594505  593786 node_conditions.go:123] node cpu capacity is 2
	I1205 20:53:20.594518  593786 node_conditions.go:105] duration metric: took 181.621539ms to run NodePressure ...
	I1205 20:53:20.594528  593786 start.go:241] waiting for startup goroutines ...
	I1205 20:53:20.594535  593786 start.go:246] waiting for cluster config update ...
	I1205 20:53:20.594545  593786 start.go:255] writing updated cluster config ...
	I1205 20:53:20.594794  593786 ssh_runner.go:195] Run: rm -f paused
	I1205 20:53:20.644471  593786 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:53:20.646962  593786 out.go:177] * Done! kubectl is now configured to use "flannel-383287" cluster and "default" namespace by default
	I1205 20:53:21.082105  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:21.082603  596117 main.go:141] libmachine: (bridge-383287) DBG | unable to find current IP address of domain bridge-383287 in network mk-bridge-383287
	I1205 20:53:21.082624  596117 main.go:141] libmachine: (bridge-383287) DBG | I1205 20:53:21.082557  596163 retry.go:31] will retry after 4.03947355s: waiting for machine to come up
	I1205 20:53:25.123626  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:25.124031  596117 main.go:141] libmachine: (bridge-383287) DBG | unable to find current IP address of domain bridge-383287 in network mk-bridge-383287
	I1205 20:53:25.124077  596117 main.go:141] libmachine: (bridge-383287) DBG | I1205 20:53:25.124009  596163 retry.go:31] will retry after 4.120216687s: waiting for machine to come up
	I1205 20:53:29.245706  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:29.246165  596117 main.go:141] libmachine: (bridge-383287) Found IP for machine: 192.168.72.138
	I1205 20:53:29.246190  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has current primary IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:29.246199  596117 main.go:141] libmachine: (bridge-383287) Reserving static IP address...
	I1205 20:53:29.246575  596117 main.go:141] libmachine: (bridge-383287) DBG | unable to find host DHCP lease matching {name: "bridge-383287", mac: "52:54:00:92:d9:79", ip: "192.168.72.138"} in network mk-bridge-383287
	I1205 20:53:29.331189  596117 main.go:141] libmachine: (bridge-383287) DBG | Getting to WaitForSSH function...
	I1205 20:53:29.331218  596117 main.go:141] libmachine: (bridge-383287) Reserved static IP address: 192.168.72.138
	I1205 20:53:29.331233  596117 main.go:141] libmachine: (bridge-383287) Waiting for SSH to be available...
	I1205 20:53:29.334142  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:29.334445  596117 main.go:141] libmachine: (bridge-383287) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287
	I1205 20:53:29.334473  596117 main.go:141] libmachine: (bridge-383287) DBG | unable to find defined IP address of network mk-bridge-383287 interface with MAC address 52:54:00:92:d9:79
	I1205 20:53:29.334651  596117 main.go:141] libmachine: (bridge-383287) DBG | Using SSH client type: external
	I1205 20:53:29.334678  596117 main.go:141] libmachine: (bridge-383287) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/bridge-383287/id_rsa (-rw-------)
	I1205 20:53:29.334716  596117 main.go:141] libmachine: (bridge-383287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/bridge-383287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:53:29.334731  596117 main.go:141] libmachine: (bridge-383287) DBG | About to run SSH command:
	I1205 20:53:29.334748  596117 main.go:141] libmachine: (bridge-383287) DBG | exit 0
	I1205 20:53:29.338748  596117 main.go:141] libmachine: (bridge-383287) DBG | SSH cmd err, output: exit status 255: 
	I1205 20:53:29.338773  596117 main.go:141] libmachine: (bridge-383287) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 20:53:29.338783  596117 main.go:141] libmachine: (bridge-383287) DBG | command : exit 0
	I1205 20:53:29.338792  596117 main.go:141] libmachine: (bridge-383287) DBG | err     : exit status 255
	I1205 20:53:29.338803  596117 main.go:141] libmachine: (bridge-383287) DBG | output  : 
	I1205 20:53:33.789990  597303 start.go:364] duration metric: took 20.245922789s to acquireMachinesLock for "calico-383287"
	I1205 20:53:33.790071  597303 start.go:93] Provisioning new machine with config: &{Name:calico-383287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:calico-383287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:53:33.790237  597303 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:53:32.338989  596117 main.go:141] libmachine: (bridge-383287) DBG | Getting to WaitForSSH function...
	I1205 20:53:32.341762  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.342224  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:32.342252  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.342448  596117 main.go:141] libmachine: (bridge-383287) DBG | Using SSH client type: external
	I1205 20:53:32.342472  596117 main.go:141] libmachine: (bridge-383287) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/bridge-383287/id_rsa (-rw-------)
	I1205 20:53:32.342510  596117 main.go:141] libmachine: (bridge-383287) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/bridge-383287/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:53:32.342536  596117 main.go:141] libmachine: (bridge-383287) DBG | About to run SSH command:
	I1205 20:53:32.342571  596117 main.go:141] libmachine: (bridge-383287) DBG | exit 0
	I1205 20:53:32.465115  596117 main.go:141] libmachine: (bridge-383287) DBG | SSH cmd err, output: <nil>: 
	I1205 20:53:32.465443  596117 main.go:141] libmachine: (bridge-383287) KVM machine creation complete!
	I1205 20:53:32.465846  596117 main.go:141] libmachine: (bridge-383287) Calling .GetConfigRaw
	I1205 20:53:32.466629  596117 main.go:141] libmachine: (bridge-383287) Calling .DriverName
	I1205 20:53:32.466847  596117 main.go:141] libmachine: (bridge-383287) Calling .DriverName
	I1205 20:53:32.467022  596117 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:53:32.467040  596117 main.go:141] libmachine: (bridge-383287) Calling .GetState
	I1205 20:53:32.468345  596117 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:53:32.468360  596117 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:53:32.468373  596117 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:53:32.468379  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHHostname
	I1205 20:53:32.470766  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.471266  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:32.471317  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.471423  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHPort
	I1205 20:53:32.471610  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:32.471758  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:32.471929  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHUsername
	I1205 20:53:32.472153  596117 main.go:141] libmachine: Using SSH client type: native
	I1205 20:53:32.472383  596117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1205 20:53:32.472395  596117 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:53:32.571818  596117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:53:32.571845  596117 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:53:32.571855  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHHostname
	I1205 20:53:32.574528  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.575014  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:32.575044  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.575203  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHPort
	I1205 20:53:32.575411  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:32.575556  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:32.575739  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHUsername
	I1205 20:53:32.575933  596117 main.go:141] libmachine: Using SSH client type: native
	I1205 20:53:32.576133  596117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1205 20:53:32.576150  596117 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:53:32.677523  596117 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:53:32.677659  596117 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:53:32.677673  596117 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:53:32.677682  596117 main.go:141] libmachine: (bridge-383287) Calling .GetMachineName
	I1205 20:53:32.677949  596117 buildroot.go:166] provisioning hostname "bridge-383287"
	I1205 20:53:32.677982  596117 main.go:141] libmachine: (bridge-383287) Calling .GetMachineName
	I1205 20:53:32.678204  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHHostname
	I1205 20:53:32.681285  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.681707  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:32.681730  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.681913  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHPort
	I1205 20:53:32.682238  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:32.682382  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:32.682584  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHUsername
	I1205 20:53:32.682788  596117 main.go:141] libmachine: Using SSH client type: native
	I1205 20:53:32.682973  596117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1205 20:53:32.682985  596117 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-383287 && echo "bridge-383287" | sudo tee /etc/hostname
	I1205 20:53:32.795940  596117 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-383287
	
	I1205 20:53:32.795975  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHHostname
	I1205 20:53:32.798945  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.799318  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:32.799347  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.799518  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHPort
	I1205 20:53:32.799717  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:32.799897  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:32.800006  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHUsername
	I1205 20:53:32.800148  596117 main.go:141] libmachine: Using SSH client type: native
	I1205 20:53:32.800393  596117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1205 20:53:32.800413  596117 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-383287' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-383287/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-383287' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:53:32.914173  596117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:53:32.914212  596117 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:53:32.914280  596117 buildroot.go:174] setting up certificates
	I1205 20:53:32.914306  596117 provision.go:84] configureAuth start
	I1205 20:53:32.914321  596117 main.go:141] libmachine: (bridge-383287) Calling .GetMachineName
	I1205 20:53:32.914627  596117 main.go:141] libmachine: (bridge-383287) Calling .GetIP
	I1205 20:53:32.917884  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.918361  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:32.918393  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.918601  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHHostname
	I1205 20:53:32.921378  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.921802  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:32.921838  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:32.921938  596117 provision.go:143] copyHostCerts
	I1205 20:53:32.922006  596117 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:53:32.922028  596117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:53:32.922120  596117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:53:32.922273  596117 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:53:32.922287  596117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:53:32.922328  596117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:53:32.922431  596117 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:53:32.922442  596117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:53:32.922478  596117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:53:32.922564  596117 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.bridge-383287 san=[127.0.0.1 192.168.72.138 bridge-383287 localhost minikube]
	I1205 20:53:33.137724  596117 provision.go:177] copyRemoteCerts
	I1205 20:53:33.137785  596117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:53:33.137815  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHHostname
	I1205 20:53:33.141210  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.141566  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:33.141596  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.141776  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHPort
	I1205 20:53:33.141970  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:33.142145  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHUsername
	I1205 20:53:33.142307  596117 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/bridge-383287/id_rsa Username:docker}
	I1205 20:53:33.227713  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:53:33.255508  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:53:33.283577  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:53:33.311155  596117 provision.go:87] duration metric: took 396.829701ms to configureAuth
	I1205 20:53:33.311187  596117 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:53:33.311404  596117 config.go:182] Loaded profile config "bridge-383287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:53:33.311502  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHHostname
	I1205 20:53:33.314464  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.314867  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:33.314891  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.315121  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHPort
	I1205 20:53:33.315347  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:33.315548  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:33.315699  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHUsername
	I1205 20:53:33.315872  596117 main.go:141] libmachine: Using SSH client type: native
	I1205 20:53:33.316111  596117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1205 20:53:33.316134  596117 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:53:33.545282  596117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:53:33.545322  596117 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:53:33.545333  596117 main.go:141] libmachine: (bridge-383287) Calling .GetURL
	I1205 20:53:33.546846  596117 main.go:141] libmachine: (bridge-383287) DBG | Using libvirt version 6000000
	I1205 20:53:33.549368  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.549735  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:33.549764  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.549945  596117 main.go:141] libmachine: Docker is up and running!
	I1205 20:53:33.549962  596117 main.go:141] libmachine: Reticulating splines...
	I1205 20:53:33.549970  596117 client.go:171] duration metric: took 29.086433021s to LocalClient.Create
	I1205 20:53:33.549994  596117 start.go:167] duration metric: took 29.086547197s to libmachine.API.Create "bridge-383287"
	I1205 20:53:33.550004  596117 start.go:293] postStartSetup for "bridge-383287" (driver="kvm2")
	I1205 20:53:33.550014  596117 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:53:33.550032  596117 main.go:141] libmachine: (bridge-383287) Calling .DriverName
	I1205 20:53:33.550304  596117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:53:33.550331  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHHostname
	I1205 20:53:33.552978  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.553367  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:33.553407  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.553523  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHPort
	I1205 20:53:33.553738  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:33.553909  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHUsername
	I1205 20:53:33.554081  596117 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/bridge-383287/id_rsa Username:docker}
	I1205 20:53:33.639851  596117 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:53:33.644923  596117 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:53:33.644961  596117 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:53:33.645043  596117 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:53:33.645141  596117 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:53:33.645262  596117 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:53:33.655575  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:53:33.681014  596117 start.go:296] duration metric: took 130.993713ms for postStartSetup
	I1205 20:53:33.681084  596117 main.go:141] libmachine: (bridge-383287) Calling .GetConfigRaw
	I1205 20:53:33.681709  596117 main.go:141] libmachine: (bridge-383287) Calling .GetIP
	I1205 20:53:33.684834  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.685233  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:33.685264  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.685500  596117 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/config.json ...
	I1205 20:53:33.685733  596117 start.go:128] duration metric: took 29.246396876s to createHost
	I1205 20:53:33.685761  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHHostname
	I1205 20:53:33.688458  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.688824  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:33.688853  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.689071  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHPort
	I1205 20:53:33.689296  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:33.689456  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:33.689601  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHUsername
	I1205 20:53:33.689758  596117 main.go:141] libmachine: Using SSH client type: native
	I1205 20:53:33.689942  596117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1205 20:53:33.689953  596117 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:53:33.789804  596117 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733432013.766292924
	
	I1205 20:53:33.789843  596117 fix.go:216] guest clock: 1733432013.766292924
	I1205 20:53:33.789855  596117 fix.go:229] Guest: 2024-12-05 20:53:33.766292924 +0000 UTC Remote: 2024-12-05 20:53:33.685747244 +0000 UTC m=+29.391649907 (delta=80.54568ms)
	I1205 20:53:33.789887  596117 fix.go:200] guest clock delta is within tolerance: 80.54568ms
	I1205 20:53:33.789897  596117 start.go:83] releasing machines lock for "bridge-383287", held for 29.350658615s
	I1205 20:53:33.789941  596117 main.go:141] libmachine: (bridge-383287) Calling .DriverName
	I1205 20:53:33.790248  596117 main.go:141] libmachine: (bridge-383287) Calling .GetIP
	I1205 20:53:33.793878  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.794268  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:33.794305  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.794434  596117 main.go:141] libmachine: (bridge-383287) Calling .DriverName
	I1205 20:53:33.795074  596117 main.go:141] libmachine: (bridge-383287) Calling .DriverName
	I1205 20:53:33.795302  596117 main.go:141] libmachine: (bridge-383287) Calling .DriverName
	I1205 20:53:33.795424  596117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:53:33.795468  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHHostname
	I1205 20:53:33.795586  596117 ssh_runner.go:195] Run: cat /version.json
	I1205 20:53:33.795614  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHHostname
	I1205 20:53:33.798620  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.798945  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.799051  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:33.799101  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.799304  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHPort
	I1205 20:53:33.799350  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:33.799375  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:33.799490  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:33.799595  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHPort
	I1205 20:53:33.799627  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHUsername
	I1205 20:53:33.799759  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHKeyPath
	I1205 20:53:33.799835  596117 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/bridge-383287/id_rsa Username:docker}
	I1205 20:53:33.799942  596117 main.go:141] libmachine: (bridge-383287) Calling .GetSSHUsername
	I1205 20:53:33.800088  596117 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/bridge-383287/id_rsa Username:docker}
	I1205 20:53:33.904189  596117 ssh_runner.go:195] Run: systemctl --version
	I1205 20:53:33.911368  596117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:53:34.091373  596117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:53:34.098263  596117 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:53:34.098351  596117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:53:34.117297  596117 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:53:34.117341  596117 start.go:495] detecting cgroup driver to use...
	I1205 20:53:34.117415  596117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:53:34.134551  596117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:53:34.150702  596117 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:53:34.150768  596117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:53:34.166128  596117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:53:34.181074  596117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:53:34.308951  596117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:53:34.477541  596117 docker.go:233] disabling docker service ...
	I1205 20:53:34.477623  596117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:53:34.495417  596117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:53:34.509110  596117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:53:34.666499  596117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:53:34.787030  596117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:53:34.802274  596117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:53:34.824987  596117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:53:34.825065  596117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:53:34.837923  596117 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:53:34.838007  596117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:53:34.852851  596117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:53:34.864615  596117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:53:34.876222  596117 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:53:34.887657  596117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:53:34.899192  596117 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:53:34.919451  596117 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:53:34.930572  596117 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:53:34.940586  596117 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:53:34.940659  596117 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:53:34.954754  596117 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:53:34.965097  596117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:53:35.098435  596117 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:53:35.195708  596117 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:53:35.195818  596117 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:53:35.201702  596117 start.go:563] Will wait 60s for crictl version
	I1205 20:53:35.201777  596117 ssh_runner.go:195] Run: which crictl
	I1205 20:53:35.207081  596117 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:53:35.258112  596117 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:53:35.258212  596117 ssh_runner.go:195] Run: crio --version
	I1205 20:53:35.299060  596117 ssh_runner.go:195] Run: crio --version
	I1205 20:53:35.338539  596117 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:53:33.793166  597303 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 20:53:33.793404  597303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:53:33.793452  597303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:53:33.815192  597303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35281
	I1205 20:53:33.815742  597303 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:53:33.816517  597303 main.go:141] libmachine: Using API Version  1
	I1205 20:53:33.816548  597303 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:53:33.816955  597303 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:53:33.817194  597303 main.go:141] libmachine: (calico-383287) Calling .GetMachineName
	I1205 20:53:33.817356  597303 main.go:141] libmachine: (calico-383287) Calling .DriverName
	I1205 20:53:33.817537  597303 start.go:159] libmachine.API.Create for "calico-383287" (driver="kvm2")
	I1205 20:53:33.817583  597303 client.go:168] LocalClient.Create starting
	I1205 20:53:33.817624  597303 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem
	I1205 20:53:33.817668  597303 main.go:141] libmachine: Decoding PEM data...
	I1205 20:53:33.817690  597303 main.go:141] libmachine: Parsing certificate...
	I1205 20:53:33.817778  597303 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem
	I1205 20:53:33.817820  597303 main.go:141] libmachine: Decoding PEM data...
	I1205 20:53:33.817839  597303 main.go:141] libmachine: Parsing certificate...
	I1205 20:53:33.817866  597303 main.go:141] libmachine: Running pre-create checks...
	I1205 20:53:33.817880  597303 main.go:141] libmachine: (calico-383287) Calling .PreCreateCheck
	I1205 20:53:33.818341  597303 main.go:141] libmachine: (calico-383287) Calling .GetConfigRaw
	I1205 20:53:33.818910  597303 main.go:141] libmachine: Creating machine...
	I1205 20:53:33.818930  597303 main.go:141] libmachine: (calico-383287) Calling .Create
	I1205 20:53:33.819232  597303 main.go:141] libmachine: (calico-383287) Creating KVM machine...
	I1205 20:53:33.820710  597303 main.go:141] libmachine: (calico-383287) DBG | found existing default KVM network
	I1205 20:53:33.822303  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:33.822115  597488 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:f0:e3} reservation:<nil>}
	I1205 20:53:33.823404  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:33.823324  597488 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:6c:e0} reservation:<nil>}
	I1205 20:53:33.824744  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:33.824641  597488 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000310870}
	I1205 20:53:33.824769  597303 main.go:141] libmachine: (calico-383287) DBG | created network xml: 
	I1205 20:53:33.824787  597303 main.go:141] libmachine: (calico-383287) DBG | <network>
	I1205 20:53:33.824796  597303 main.go:141] libmachine: (calico-383287) DBG |   <name>mk-calico-383287</name>
	I1205 20:53:33.824803  597303 main.go:141] libmachine: (calico-383287) DBG |   <dns enable='no'/>
	I1205 20:53:33.824815  597303 main.go:141] libmachine: (calico-383287) DBG |   
	I1205 20:53:33.824829  597303 main.go:141] libmachine: (calico-383287) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1205 20:53:33.824836  597303 main.go:141] libmachine: (calico-383287) DBG |     <dhcp>
	I1205 20:53:33.824842  597303 main.go:141] libmachine: (calico-383287) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1205 20:53:33.824847  597303 main.go:141] libmachine: (calico-383287) DBG |     </dhcp>
	I1205 20:53:33.824853  597303 main.go:141] libmachine: (calico-383287) DBG |   </ip>
	I1205 20:53:33.824864  597303 main.go:141] libmachine: (calico-383287) DBG |   
	I1205 20:53:33.824871  597303 main.go:141] libmachine: (calico-383287) DBG | </network>
	I1205 20:53:33.824882  597303 main.go:141] libmachine: (calico-383287) DBG | 
	I1205 20:53:33.830825  597303 main.go:141] libmachine: (calico-383287) DBG | trying to create private KVM network mk-calico-383287 192.168.61.0/24...
	I1205 20:53:33.912920  597303 main.go:141] libmachine: (calico-383287) DBG | private KVM network mk-calico-383287 192.168.61.0/24 created
	I1205 20:53:33.912988  597303 main.go:141] libmachine: (calico-383287) Setting up store path in /home/jenkins/minikube-integration/20052-530897/.minikube/machines/calico-383287 ...
	I1205 20:53:33.913022  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:33.912889  597488 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:53:33.913040  597303 main.go:141] libmachine: (calico-383287) Building disk image from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:53:33.913067  597303 main.go:141] libmachine: (calico-383287) Downloading /home/jenkins/minikube-integration/20052-530897/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:53:34.218757  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:34.218541  597488 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/calico-383287/id_rsa...
	I1205 20:53:34.380418  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:34.380218  597488 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/calico-383287/calico-383287.rawdisk...
	I1205 20:53:34.380451  597303 main.go:141] libmachine: (calico-383287) DBG | Writing magic tar header
	I1205 20:53:34.380473  597303 main.go:141] libmachine: (calico-383287) DBG | Writing SSH key tar header
	I1205 20:53:34.380485  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:34.380407  597488 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/calico-383287 ...
	I1205 20:53:34.380506  597303 main.go:141] libmachine: (calico-383287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/calico-383287
	I1205 20:53:34.380566  597303 main.go:141] libmachine: (calico-383287) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines/calico-383287 (perms=drwx------)
	I1205 20:53:34.380594  597303 main.go:141] libmachine: (calico-383287) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:53:34.380611  597303 main.go:141] libmachine: (calico-383287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube/machines
	I1205 20:53:34.380628  597303 main.go:141] libmachine: (calico-383287) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897/.minikube (perms=drwxr-xr-x)
	I1205 20:53:34.380646  597303 main.go:141] libmachine: (calico-383287) Setting executable bit set on /home/jenkins/minikube-integration/20052-530897 (perms=drwxrwxr-x)
	I1205 20:53:34.380673  597303 main.go:141] libmachine: (calico-383287) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:53:34.380687  597303 main.go:141] libmachine: (calico-383287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:53:34.380700  597303 main.go:141] libmachine: (calico-383287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20052-530897
	I1205 20:53:34.380710  597303 main.go:141] libmachine: (calico-383287) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:53:34.380737  597303 main.go:141] libmachine: (calico-383287) Creating domain...
	I1205 20:53:34.380790  597303 main.go:141] libmachine: (calico-383287) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:53:34.380814  597303 main.go:141] libmachine: (calico-383287) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:53:34.380827  597303 main.go:141] libmachine: (calico-383287) DBG | Checking permissions on dir: /home
	I1205 20:53:34.380839  597303 main.go:141] libmachine: (calico-383287) DBG | Skipping /home - not owner
	I1205 20:53:34.381987  597303 main.go:141] libmachine: (calico-383287) define libvirt domain using xml: 
	I1205 20:53:34.382010  597303 main.go:141] libmachine: (calico-383287) <domain type='kvm'>
	I1205 20:53:34.382030  597303 main.go:141] libmachine: (calico-383287)   <name>calico-383287</name>
	I1205 20:53:34.382045  597303 main.go:141] libmachine: (calico-383287)   <memory unit='MiB'>3072</memory>
	I1205 20:53:34.382055  597303 main.go:141] libmachine: (calico-383287)   <vcpu>2</vcpu>
	I1205 20:53:34.382065  597303 main.go:141] libmachine: (calico-383287)   <features>
	I1205 20:53:34.382073  597303 main.go:141] libmachine: (calico-383287)     <acpi/>
	I1205 20:53:34.382088  597303 main.go:141] libmachine: (calico-383287)     <apic/>
	I1205 20:53:34.382093  597303 main.go:141] libmachine: (calico-383287)     <pae/>
	I1205 20:53:34.382098  597303 main.go:141] libmachine: (calico-383287)     
	I1205 20:53:34.382106  597303 main.go:141] libmachine: (calico-383287)   </features>
	I1205 20:53:34.382110  597303 main.go:141] libmachine: (calico-383287)   <cpu mode='host-passthrough'>
	I1205 20:53:34.382141  597303 main.go:141] libmachine: (calico-383287)   
	I1205 20:53:34.382171  597303 main.go:141] libmachine: (calico-383287)   </cpu>
	I1205 20:53:34.382183  597303 main.go:141] libmachine: (calico-383287)   <os>
	I1205 20:53:34.382192  597303 main.go:141] libmachine: (calico-383287)     <type>hvm</type>
	I1205 20:53:34.382198  597303 main.go:141] libmachine: (calico-383287)     <boot dev='cdrom'/>
	I1205 20:53:34.382208  597303 main.go:141] libmachine: (calico-383287)     <boot dev='hd'/>
	I1205 20:53:34.382239  597303 main.go:141] libmachine: (calico-383287)     <bootmenu enable='no'/>
	I1205 20:53:34.382261  597303 main.go:141] libmachine: (calico-383287)   </os>
	I1205 20:53:34.382270  597303 main.go:141] libmachine: (calico-383287)   <devices>
	I1205 20:53:34.382281  597303 main.go:141] libmachine: (calico-383287)     <disk type='file' device='cdrom'>
	I1205 20:53:34.382293  597303 main.go:141] libmachine: (calico-383287)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/calico-383287/boot2docker.iso'/>
	I1205 20:53:34.382305  597303 main.go:141] libmachine: (calico-383287)       <target dev='hdc' bus='scsi'/>
	I1205 20:53:34.382316  597303 main.go:141] libmachine: (calico-383287)       <readonly/>
	I1205 20:53:34.382337  597303 main.go:141] libmachine: (calico-383287)     </disk>
	I1205 20:53:34.382349  597303 main.go:141] libmachine: (calico-383287)     <disk type='file' device='disk'>
	I1205 20:53:34.382361  597303 main.go:141] libmachine: (calico-383287)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:53:34.382375  597303 main.go:141] libmachine: (calico-383287)       <source file='/home/jenkins/minikube-integration/20052-530897/.minikube/machines/calico-383287/calico-383287.rawdisk'/>
	I1205 20:53:34.382386  597303 main.go:141] libmachine: (calico-383287)       <target dev='hda' bus='virtio'/>
	I1205 20:53:34.382396  597303 main.go:141] libmachine: (calico-383287)     </disk>
	I1205 20:53:34.382409  597303 main.go:141] libmachine: (calico-383287)     <interface type='network'>
	I1205 20:53:34.382424  597303 main.go:141] libmachine: (calico-383287)       <source network='mk-calico-383287'/>
	I1205 20:53:34.382443  597303 main.go:141] libmachine: (calico-383287)       <model type='virtio'/>
	I1205 20:53:34.382452  597303 main.go:141] libmachine: (calico-383287)     </interface>
	I1205 20:53:34.382460  597303 main.go:141] libmachine: (calico-383287)     <interface type='network'>
	I1205 20:53:34.382470  597303 main.go:141] libmachine: (calico-383287)       <source network='default'/>
	I1205 20:53:34.382478  597303 main.go:141] libmachine: (calico-383287)       <model type='virtio'/>
	I1205 20:53:34.382488  597303 main.go:141] libmachine: (calico-383287)     </interface>
	I1205 20:53:34.382513  597303 main.go:141] libmachine: (calico-383287)     <serial type='pty'>
	I1205 20:53:34.382533  597303 main.go:141] libmachine: (calico-383287)       <target port='0'/>
	I1205 20:53:34.382552  597303 main.go:141] libmachine: (calico-383287)     </serial>
	I1205 20:53:34.382564  597303 main.go:141] libmachine: (calico-383287)     <console type='pty'>
	I1205 20:53:34.382577  597303 main.go:141] libmachine: (calico-383287)       <target type='serial' port='0'/>
	I1205 20:53:34.382586  597303 main.go:141] libmachine: (calico-383287)     </console>
	I1205 20:53:34.382596  597303 main.go:141] libmachine: (calico-383287)     <rng model='virtio'>
	I1205 20:53:34.382615  597303 main.go:141] libmachine: (calico-383287)       <backend model='random'>/dev/random</backend>
	I1205 20:53:34.382627  597303 main.go:141] libmachine: (calico-383287)     </rng>
	I1205 20:53:34.382635  597303 main.go:141] libmachine: (calico-383287)     
	I1205 20:53:34.382646  597303 main.go:141] libmachine: (calico-383287)     
	I1205 20:53:34.382654  597303 main.go:141] libmachine: (calico-383287)   </devices>
	I1205 20:53:34.382663  597303 main.go:141] libmachine: (calico-383287) </domain>
	I1205 20:53:34.382672  597303 main.go:141] libmachine: (calico-383287) 
	I1205 20:53:34.386931  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:d8:58:77 in network default
	I1205 20:53:34.387717  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:34.387770  597303 main.go:141] libmachine: (calico-383287) Ensuring networks are active...
	I1205 20:53:34.388671  597303 main.go:141] libmachine: (calico-383287) Ensuring network default is active
	I1205 20:53:34.389057  597303 main.go:141] libmachine: (calico-383287) Ensuring network mk-calico-383287 is active
	I1205 20:53:34.389860  597303 main.go:141] libmachine: (calico-383287) Getting domain xml...
	I1205 20:53:34.390853  597303 main.go:141] libmachine: (calico-383287) Creating domain...
	I1205 20:53:35.813239  597303 main.go:141] libmachine: (calico-383287) Waiting to get IP...
	I1205 20:53:35.814499  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:35.815264  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:35.815297  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:35.815210  597488 retry.go:31] will retry after 229.437593ms: waiting for machine to come up
	I1205 20:53:36.046832  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:36.047553  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:36.047581  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:36.047528  597488 retry.go:31] will retry after 300.219563ms: waiting for machine to come up
	I1205 20:53:36.349351  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:36.350000  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:36.350025  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:36.349964  597488 retry.go:31] will retry after 369.761787ms: waiting for machine to come up
	I1205 20:53:36.721691  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:36.722316  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:36.722346  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:36.722228  597488 retry.go:31] will retry after 510.878851ms: waiting for machine to come up
	I1205 20:53:37.234712  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:37.235390  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:37.235415  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:37.235336  597488 retry.go:31] will retry after 505.705802ms: waiting for machine to come up
	I1205 20:53:37.743368  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:37.744743  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:37.744774  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:37.744681  597488 retry.go:31] will retry after 688.050499ms: waiting for machine to come up
	I1205 20:53:38.434940  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:38.435416  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:38.435458  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:38.435367  597488 retry.go:31] will retry after 1.117832867s: waiting for machine to come up
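	Note: the "waiting for machine to come up" retries above are libmachine polling the libvirt network for a DHCP lease that matches the domain's MAC address (52:54:00:b4:f3:2c) before it can SSH into the guest. A minimal sketch of the equivalent manual check, assuming virsh is available on the host and the qemu:///system URI used in this run:
	  # list the NICs libvirt attached to the domain (MAC + source network)
	  virsh --connect qemu:///system domiflist calico-383287
	  # show DHCP leases handed out on the private minikube network
	  virsh --connect qemu:///system net-dhcp-leases mk-calico-383287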
	I1205 20:53:35.340217  596117 main.go:141] libmachine: (bridge-383287) Calling .GetIP
	I1205 20:53:35.343795  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:35.344421  596117 main.go:141] libmachine: (bridge-383287) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:d9:79", ip: ""} in network mk-bridge-383287: {Iface:virbr4 ExpiryTime:2024-12-05 21:53:21 +0000 UTC Type:0 Mac:52:54:00:92:d9:79 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:bridge-383287 Clientid:01:52:54:00:92:d9:79}
	I1205 20:53:35.344447  596117 main.go:141] libmachine: (bridge-383287) DBG | domain bridge-383287 has defined IP address 192.168.72.138 and MAC address 52:54:00:92:d9:79 in network mk-bridge-383287
	I1205 20:53:35.344699  596117 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:53:35.349478  596117 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:53:35.365843  596117 kubeadm.go:883] updating cluster {Name:bridge-383287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:bridge-383287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:53:35.366002  596117 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:53:35.366064  596117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:53:35.401772  596117 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:53:35.401963  596117 ssh_runner.go:195] Run: which lz4
	I1205 20:53:35.408210  596117 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:53:35.412990  596117 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:53:35.413029  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:53:37.062499  596117 crio.go:462] duration metric: took 1.654305482s to copy over tarball
	I1205 20:53:37.062607  596117 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:53:39.555613  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:39.556210  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:39.556247  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:39.556151  597488 retry.go:31] will retry after 896.247805ms: waiting for machine to come up
	I1205 20:53:40.454405  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:40.454874  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:40.454905  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:40.454818  597488 retry.go:31] will retry after 1.546810016s: waiting for machine to come up
	I1205 20:53:42.003715  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:42.004294  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:42.004327  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:42.004160  597488 retry.go:31] will retry after 1.916981793s: waiting for machine to come up
	I1205 20:53:39.645044  596117 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.5824009s)
	I1205 20:53:39.645075  596117 crio.go:469] duration metric: took 2.582532093s to extract the tarball
	I1205 20:53:39.645084  596117 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:53:39.684462  596117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:53:39.728123  596117 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:53:39.728155  596117 cache_images.go:84] Images are preloaded, skipping loading
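	Note: the preload tarball scp'd to /preloaded.tar.lz4 and unpacked into /var above is what lets the second "sudo crictl images --output json" call report all images as preloaded. A sketch for reproducing that check against the guest, assuming the bridge-383287 profile name from this log and that minikube is on the PATH:
	  # ask CRI-O inside the VM which images it already has
	  minikube ssh -p bridge-383287 "sudo crictl images"
	  # same query in JSON form, as the test binary issues it
	  minikube ssh -p bridge-383287 "sudo crictl images --output json"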
	I1205 20:53:39.728167  596117 kubeadm.go:934] updating node { 192.168.72.138 8443 v1.31.2 crio true true} ...
	I1205 20:53:39.728343  596117 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-383287 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:bridge-383287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1205 20:53:39.728439  596117 ssh_runner.go:195] Run: crio config
	I1205 20:53:39.776797  596117 cni.go:84] Creating CNI manager for "bridge"
	I1205 20:53:39.776826  596117 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:53:39.776858  596117 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.138 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-383287 NodeName:bridge-383287 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:53:39.777018  596117 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-383287"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.138"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.138"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:53:39.777105  596117 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:53:39.789042  596117 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:53:39.789128  596117 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:53:39.801538  596117 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 20:53:39.823560  596117 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:53:39.845315  596117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
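	Note: the kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file) is the payload written to /var/tmp/minikube/kubeadm.yaml.new by the scp just logged. A hedged sketch for sanity-checking such a file before "kubeadm init", assuming this kubeadm build ships the "config validate" subcommand:
	  # structural check of the multi-document kubeadm config
	  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	  # print upstream defaults for comparison with the generated file
	  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config print init-defaults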
	I1205 20:53:39.863649  596117 ssh_runner.go:195] Run: grep 192.168.72.138	control-plane.minikube.internal$ /etc/hosts
	I1205 20:53:39.867924  596117 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:53:39.882420  596117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:53:40.040449  596117 ssh_runner.go:195] Run: sudo systemctl start kubelet
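	Note: the unit and drop-in scp'd above (kubelet.service plus 10-kubeadm.conf carrying the [Unit]/[Service]/[Install] block shown earlier) are picked up by the daemon-reload before kubelet is started. A minimal verification sketch, run inside the guest:
	  # show the installed unit file together with its drop-ins
	  systemctl cat kubelet
	  # confirm the service actually came up
	  systemctl is-active kubelet
	  # tail recent kubelet logs if it did not
	  journalctl -u kubelet --no-pager | tail -n 20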
	I1205 20:53:40.063506  596117 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287 for IP: 192.168.72.138
	I1205 20:53:40.063538  596117 certs.go:194] generating shared ca certs ...
	I1205 20:53:40.063562  596117 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:53:40.063763  596117 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:53:40.063855  596117 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:53:40.063881  596117 certs.go:256] generating profile certs ...
	I1205 20:53:40.063961  596117 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/client.key
	I1205 20:53:40.063991  596117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/client.crt with IP's: []
	I1205 20:53:40.281888  596117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/client.crt ...
	I1205 20:53:40.281922  596117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/client.crt: {Name:mkfc9ce3ef94886946d7023131d0f4bf305d6b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:53:40.282153  596117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/client.key ...
	I1205 20:53:40.282171  596117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/client.key: {Name:mkd571aa7d59705354f0b55e9d61d6777b1bbe63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:53:40.282296  596117 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.key.cf347d23
	I1205 20:53:40.282313  596117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.crt.cf347d23 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.138]
	I1205 20:53:40.461610  596117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.crt.cf347d23 ...
	I1205 20:53:40.461642  596117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.crt.cf347d23: {Name:mkfb0d561272c16e83a4e8b1823ad4d7f31b2e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:53:40.461828  596117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.key.cf347d23 ...
	I1205 20:53:40.461848  596117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.key.cf347d23: {Name:mkb18d6c859a77013e0b8a8d226450adea2c69bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:53:40.461966  596117 certs.go:381] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.crt.cf347d23 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.crt
	I1205 20:53:40.462086  596117 certs.go:385] copying /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.key.cf347d23 -> /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.key
	I1205 20:53:40.462155  596117 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/proxy-client.key
	I1205 20:53:40.462175  596117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/proxy-client.crt with IP's: []
	I1205 20:53:40.622040  596117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/proxy-client.crt ...
	I1205 20:53:40.622075  596117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/proxy-client.crt: {Name:mk6dbf953e3ea0f12cc6f7d856b1f0dca0f44156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:53:40.622293  596117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/proxy-client.key ...
	I1205 20:53:40.622313  596117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/proxy-client.key: {Name:mk377ca6f13f76ce44bb2c994d284e6c591aa5bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
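	Note: the apiserver serving cert generated above is signed for the IPs listed in its "Generating cert" line (the service ClusterIP 10.96.0.1, loopback and the node IP 192.168.72.138, among others). A small sketch for confirming the SANs in the written certificate, assuming the profile path from this log:
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'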
	I1205 20:53:40.622517  596117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:53:40.622562  596117 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:53:40.622574  596117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:53:40.622595  596117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:53:40.622616  596117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:53:40.622639  596117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:53:40.622737  596117 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:53:40.623344  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:53:40.657124  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:53:40.689708  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:53:40.721269  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:53:40.749552  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 20:53:40.796919  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:53:40.827477  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:53:40.855629  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/bridge-383287/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:53:40.883821  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:53:40.919124  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:53:40.950409  596117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:53:40.979793  596117 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:53:40.999091  596117 ssh_runner.go:195] Run: openssl version
	I1205 20:53:41.007621  596117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:53:41.022335  596117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:53:41.028875  596117 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:53:41.028952  596117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:53:41.035862  596117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:53:41.047570  596117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:53:41.059705  596117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:53:41.064837  596117 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:53:41.064909  596117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:53:41.070902  596117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:53:41.083148  596117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:53:41.094866  596117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:53:41.100740  596117 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:53:41.100820  596117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:53:41.107842  596117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
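	Note: the ln -fs runs above implement OpenSSL's subject-hash lookup convention: each CA under /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, which is why minikubeCA.pem was linked as b5213941.0 earlier. A minimal sketch of the same step done by hand on the guest:
	  # compute the subject hash OpenSSL uses for CA lookup
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  # create the hash-named symlink the verifier expects
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"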
	I1205 20:53:41.120579  596117 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:53:41.125433  596117 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:53:41.125495  596117 kubeadm.go:392] StartCluster: {Name:bridge-383287 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:bridge-383287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:53:41.125610  596117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:53:41.125675  596117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:53:41.168879  596117 cri.go:89] found id: ""
	I1205 20:53:41.168957  596117 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:53:41.179930  596117 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:53:41.191339  596117 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:53:41.202724  596117 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:53:41.202747  596117 kubeadm.go:157] found existing configuration files:
	
	I1205 20:53:41.202805  596117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:53:41.212762  596117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:53:41.212838  596117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:53:41.223417  596117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:53:41.233121  596117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:53:41.233203  596117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:53:41.243529  596117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:53:41.253834  596117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:53:41.253892  596117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:53:41.264593  596117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:53:41.276242  596117 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:53:41.276341  596117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:53:41.287827  596117 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:53:41.466430  596117 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:53:43.922462  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:43.922979  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:43.923012  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:43.922878  597488 retry.go:31] will retry after 2.689344427s: waiting for machine to come up
	I1205 20:53:46.614678  597303 main.go:141] libmachine: (calico-383287) DBG | domain calico-383287 has defined MAC address 52:54:00:b4:f3:2c in network mk-calico-383287
	I1205 20:53:46.615279  597303 main.go:141] libmachine: (calico-383287) DBG | unable to find current IP address of domain calico-383287 in network mk-calico-383287
	I1205 20:53:46.615307  597303 main.go:141] libmachine: (calico-383287) DBG | I1205 20:53:46.615205  597488 retry.go:31] will retry after 2.935835811s: waiting for machine to come up
	I1205 20:53:52.092912  596117 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:53:52.092998  596117 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:53:52.093069  596117 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:53:52.093219  596117 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:53:52.093356  596117 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:53:52.093456  596117 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:53:52.095362  596117 out.go:235]   - Generating certificates and keys ...
	I1205 20:53:52.095460  596117 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:53:52.095547  596117 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:53:52.095634  596117 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:53:52.095722  596117 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:53:52.095790  596117 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:53:52.095850  596117 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:53:52.095926  596117 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:53:52.096096  596117 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-383287 localhost] and IPs [192.168.72.138 127.0.0.1 ::1]
	I1205 20:53:52.096163  596117 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:53:52.096356  596117 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-383287 localhost] and IPs [192.168.72.138 127.0.0.1 ::1]
	I1205 20:53:52.096416  596117 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:53:52.096470  596117 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:53:52.096535  596117 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:53:52.096622  596117 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:53:52.096667  596117 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:53:52.096724  596117 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:53:52.096773  596117 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:53:52.096827  596117 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:53:52.096880  596117 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:53:52.096954  596117 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:53:52.097069  596117 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:53:52.099541  596117 out.go:235]   - Booting up control plane ...
	I1205 20:53:52.099670  596117 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:53:52.099816  596117 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:53:52.099937  596117 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:53:52.100052  596117 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:53:52.100156  596117 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:53:52.100224  596117 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:53:52.100409  596117 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:53:52.100536  596117 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:53:52.100622  596117 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.149841ms
	I1205 20:53:52.100723  596117 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:53:52.100793  596117 kubeadm.go:310] [api-check] The API server is healthy after 6.001973384s
	I1205 20:53:52.100925  596117 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:53:52.101117  596117 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:53:52.101197  596117 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:53:52.101457  596117 kubeadm.go:310] [mark-control-plane] Marking the node bridge-383287 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:53:52.101550  596117 kubeadm.go:310] [bootstrap-token] Using token: 3qs417.0cmgpj9qyzspwlwn
	I1205 20:53:52.102991  596117 out.go:235]   - Configuring RBAC rules ...
	I1205 20:53:52.103088  596117 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:53:52.103161  596117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:53:52.103305  596117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:53:52.103418  596117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:53:52.103540  596117 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:53:52.103624  596117 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:53:52.103724  596117 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:53:52.103763  596117 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:53:52.103803  596117 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:53:52.103819  596117 kubeadm.go:310] 
	I1205 20:53:52.103895  596117 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:53:52.103911  596117 kubeadm.go:310] 
	I1205 20:53:52.104013  596117 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:53:52.104026  596117 kubeadm.go:310] 
	I1205 20:53:52.104049  596117 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:53:52.104102  596117 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:53:52.104168  596117 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:53:52.104177  596117 kubeadm.go:310] 
	I1205 20:53:52.104256  596117 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:53:52.104296  596117 kubeadm.go:310] 
	I1205 20:53:52.104365  596117 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:53:52.104374  596117 kubeadm.go:310] 
	I1205 20:53:52.104456  596117 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:53:52.104581  596117 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:53:52.104676  596117 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:53:52.104686  596117 kubeadm.go:310] 
	I1205 20:53:52.104760  596117 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:53:52.104826  596117 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:53:52.104833  596117 kubeadm.go:310] 
	I1205 20:53:52.104943  596117 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3qs417.0cmgpj9qyzspwlwn \
	I1205 20:53:52.105097  596117 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:53:52.105139  596117 kubeadm.go:310] 	--control-plane 
	I1205 20:53:52.105150  596117 kubeadm.go:310] 
	I1205 20:53:52.105270  596117 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:53:52.105280  596117 kubeadm.go:310] 
	I1205 20:53:52.105379  596117 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3qs417.0cmgpj9qyzspwlwn \
	I1205 20:53:52.105578  596117 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
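	Note: the --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA's DER-encoded public key (its Subject Public Key Info). A sketch for recomputing it on the control plane, assuming the certificatesDir /var/lib/minikube/certs configured in this run and an RSA CA key (the standard kubeadm recipe):
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'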
	I1205 20:53:52.105607  596117 cni.go:84] Creating CNI manager for "bridge"
	I1205 20:53:52.108095  596117 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.802493130Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ebc2ac9-b9de-46b7-b1a2-a3d23860e423 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.803516923Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f7a5a20-2bae-4c47-9fac-48752494410f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.803985290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733432033803963698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f7a5a20-2bae-4c47-9fac-48752494410f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.804448985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b5ef357-3a53-407e-b22c-e13ddc86309b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.804499541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b5ef357-3a53-407e-b22c-e13ddc86309b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.804802576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430768054513041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97709a18c36bfbbe17081d53a3fbdd5f4224e74eab9eebb89f38d8165bd1e9f,PodSandboxId:e5faf7274a4aada39bfb245947da4bdd772bd370531c3b8927378948371d55d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430748151041816,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2fbf81a-7842-4591-9538-b64348a8ae02,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f,PodSandboxId:882812447bb3fa1d6f4d1c36bd08e1ea0095036f747002e94a355879fc625a14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430744816765742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5drgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733430737247261935,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43,PodSandboxId:69d443d593a980dd4197e947a91a4ac3c9464456f57e01cde405ad56c6d8b63e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430737186440332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5vdcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2e18fd-6980-45c9-87a4
-f6d1ed31bf7b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430736908861133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f94d808f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff,PodSandboxId:1d0a1cb74162ffba59947b4c7683a9a397708a3563c9b3294b8288bb1b6b4924,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430728943226856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b7c3d53e623508b4ceb58ab9
7a9c81,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430717812378517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84e95c9ee06ddf16a72f8
1b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d,PodSandboxId:24da09f1d450b7e911e645bc450250d5fc0aca44a3d319480c9cb9c2bf687079,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430696374080434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304018f49d227e222ca00088ccc8b4
5b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733430696373899130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f94d80
8f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430696339287264,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84
e95c9ee06ddf16a72f81b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b5ef357-3a53-407e-b22c-e13ddc86309b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.851847765Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7ffda68-efb5-465b-ac86-f6323c8717dd name=/runtime.v1.RuntimeService/Version
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.851924137Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7ffda68-efb5-465b-ac86-f6323c8717dd name=/runtime.v1.RuntimeService/Version
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.853250748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68c1629a-64bd-410d-8097-b1eac2a12e4e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.853838175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733432033853808295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68c1629a-64bd-410d-8097-b1eac2a12e4e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.854551689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67ba7c6a-e68f-4b9c-bd01-d0f6c3687547 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.854607643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67ba7c6a-e68f-4b9c-bd01-d0f6c3687547 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.854904265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430768054513041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97709a18c36bfbbe17081d53a3fbdd5f4224e74eab9eebb89f38d8165bd1e9f,PodSandboxId:e5faf7274a4aada39bfb245947da4bdd772bd370531c3b8927378948371d55d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430748151041816,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2fbf81a-7842-4591-9538-b64348a8ae02,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f,PodSandboxId:882812447bb3fa1d6f4d1c36bd08e1ea0095036f747002e94a355879fc625a14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430744816765742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5drgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733430737247261935,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43,PodSandboxId:69d443d593a980dd4197e947a91a4ac3c9464456f57e01cde405ad56c6d8b63e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430737186440332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5vdcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2e18fd-6980-45c9-87a4
-f6d1ed31bf7b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430736908861133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f94d808f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff,PodSandboxId:1d0a1cb74162ffba59947b4c7683a9a397708a3563c9b3294b8288bb1b6b4924,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430728943226856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b7c3d53e623508b4ceb58ab9
7a9c81,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430717812378517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84e95c9ee06ddf16a72f8
1b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d,PodSandboxId:24da09f1d450b7e911e645bc450250d5fc0aca44a3d319480c9cb9c2bf687079,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430696374080434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304018f49d227e222ca00088ccc8b4
5b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733430696373899130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f94d80
8f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430696339287264,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84
e95c9ee06ddf16a72f81b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67ba7c6a-e68f-4b9c-bd01-d0f6c3687547 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.876637760Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c3cdedcb-b58c-40e0-9e10-7d5f50db8167 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.877052254Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:882812447bb3fa1d6f4d1c36bd08e1ea0095036f747002e94a355879fc625a14,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5drgc,Uid:4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430744521172145,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-5drgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:32:16.674064138Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e5faf7274a4aada39bfb245947da4bdd772bd370531c3b8927378948371d55d6,Metadata:&PodSandboxMetadata{Name:busybox,Uid:e2fbf81a-7842-4591-9538-b64348a8ae02,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1733430744520061373,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2fbf81a-7842-4591-9538-b64348a8ae02,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:32:16.674081287Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a0790a5549f6ee3c2cb5944ec62e4a7b7d45de13acb7420a2c7f06f98aff7447,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-rq8xm,Uid:99b577fd-fbfd-4178-8b06-ef96f118c30b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430742721189552,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-rq8xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99b577fd-fbfd-4178-8b06-ef96f118c30b,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05
T20:32:16.674084602Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8a858ec2-dc10-4501-8efa-72e2ea0c7927,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430736989088276,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-05T20:32:16.674079091Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69d443d593a980dd4197e947a91a4ac3c9464456f57e01cde405ad56c6d8b63e,Metadata:&PodSandboxMetadata{Name:kube-proxy-5vdcq,Uid:be2e18fd-6980-45c9-87a4-f6d1ed31bf7b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430736987635540,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5vdcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2e18fd-6980-45c9-87a4-f6d1ed31bf7b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2024-12-05T20:32:16.674075536Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1d0a1cb74162ffba59947b4c7683a9a397708a3563c9b3294b8288bb1b6b4924,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-942599,Uid:00b7c3d53e623508b4ceb58ab97a9c81,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430728854374925,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b7c3d53e623508b4ceb58ab97a9c81,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.96:2379,kubernetes.io/config.hash: 00b7c3d53e623508b4ceb58ab97a9c81,kubernetes.io/config.seen: 2024-12-05T20:31:55.646320684Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&PodSandboxMetadata{Name:k
ube-controller-manager-default-k8s-diff-port-942599,Uid:f94d808f62dee00726331cbc4b8a924f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430696174018155,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f94d808f62dee00726331cbc4b8a924f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f94d808f62dee00726331cbc4b8a924f,kubernetes.io/config.seen: 2024-12-05T20:31:35.666341242Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-942599,Uid:ae9398e84e95c9ee06ddf16a72f81b61,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430696171908870,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84e95c9ee06ddf16a72f81b61,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.96:8444,kubernetes.io/config.hash: ae9398e84e95c9ee06ddf16a72f81b61,kubernetes.io/config.seen: 2024-12-05T20:31:35.666335785Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:24da09f1d450b7e911e645bc450250d5fc0aca44a3d319480c9cb9c2bf687079,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-942599,Uid:304018f49d227e222ca00088ccc8b45b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430696171321470,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304018f49d227e222ca00088ccc8b45b,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 304018f49d227e222ca00088ccc8b45b,kubernetes.io/config.seen: 2024-12-05T20:31:35.666342612Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c3cdedcb-b58c-40e0-9e10-7d5f50db8167 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.878019766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f61b039f-82ea-4a68-aca5-966a9d77b040 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.878081063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f61b039f-82ea-4a68-aca5-966a9d77b040 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.878617746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430768054513041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97709a18c36bfbbe17081d53a3fbdd5f4224e74eab9eebb89f38d8165bd1e9f,PodSandboxId:e5faf7274a4aada39bfb245947da4bdd772bd370531c3b8927378948371d55d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430748151041816,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2fbf81a-7842-4591-9538-b64348a8ae02,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f,PodSandboxId:882812447bb3fa1d6f4d1c36bd08e1ea0095036f747002e94a355879fc625a14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430744816765742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5drgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733430737247261935,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43,PodSandboxId:69d443d593a980dd4197e947a91a4ac3c9464456f57e01cde405ad56c6d8b63e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430737186440332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5vdcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2e18fd-6980-45c9-87a4
-f6d1ed31bf7b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430736908861133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f94d808f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff,PodSandboxId:1d0a1cb74162ffba59947b4c7683a9a397708a3563c9b3294b8288bb1b6b4924,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430728943226856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b7c3d53e623508b4ceb58ab9
7a9c81,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430717812378517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84e95c9ee06ddf16a72f8
1b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d,PodSandboxId:24da09f1d450b7e911e645bc450250d5fc0aca44a3d319480c9cb9c2bf687079,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430696374080434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304018f49d227e222ca00088ccc8b4
5b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733430696373899130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f94d80
8f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430696339287264,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84
e95c9ee06ddf16a72f81b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f61b039f-82ea-4a68-aca5-966a9d77b040 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.899057030Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3e3b841-e437-4ecb-9581-339d18a962fd name=/runtime.v1.RuntimeService/Version
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.899159581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3e3b841-e437-4ecb-9581-339d18a962fd name=/runtime.v1.RuntimeService/Version
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.901117696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fe6a312-78db-44b7-a164-c0345017df86 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.901806937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733432033901769257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fe6a312-78db-44b7-a164-c0345017df86 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.902462683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4267e13-23a2-495f-ae4e-60f0b83869c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.902538567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4267e13-23a2-495f-ae4e-60f0b83869c8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:53:53 default-k8s-diff-port-942599 crio[719]: time="2024-12-05 20:53:53.903241281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430768054513041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d97709a18c36bfbbe17081d53a3fbdd5f4224e74eab9eebb89f38d8165bd1e9f,PodSandboxId:e5faf7274a4aada39bfb245947da4bdd772bd370531c3b8927378948371d55d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430748151041816,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2fbf81a-7842-4591-9538-b64348a8ae02,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f,PodSandboxId:882812447bb3fa1d6f4d1c36bd08e1ea0095036f747002e94a355879fc625a14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430744816765742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5drgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c,PodSandboxId:49a79f66de45cea9e1efa6ed58c8c02967386692415e702a67bf9f5e3a2ba2fe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733430737247261935,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8a858ec2-dc10-4501-8efa-72e2ea0c7927,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43,PodSandboxId:69d443d593a980dd4197e947a91a4ac3c9464456f57e01cde405ad56c6d8b63e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430737186440332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5vdcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be2e18fd-6980-45c9-87a4
-f6d1ed31bf7b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430736908861133,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: f94d808f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff,PodSandboxId:1d0a1cb74162ffba59947b4c7683a9a397708a3563c9b3294b8288bb1b6b4924,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430728943226856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b7c3d53e623508b4ceb58ab9
7a9c81,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430717812378517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84e95c9ee06ddf16a72f8
1b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d,PodSandboxId:24da09f1d450b7e911e645bc450250d5fc0aca44a3d319480c9cb9c2bf687079,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430696374080434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 304018f49d227e222ca00088ccc8b4
5b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66,PodSandboxId:dee4184f6080c96bc39b7ee74a7ca430a4ad03c8b3cace04ead7a29ce8cef1c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733430696373899130,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f94d80
8f62dee00726331cbc4b8a924f,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36,PodSandboxId:de2ea815e00faafd42e63b4c015b92dc9e561da13780bb50d89de21fa68474e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430696339287264,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-942599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9398e84
e95c9ee06ddf16a72f81b61,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4267e13-23a2-495f-ae4e-60f0b83869c8 name=/runtime.v1.RuntimeService/ListContainers
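The journal excerpt above is CRI-O answering the kubelet's periodic CRI polls: Version, ImageFsInfo, and ListContainers with an empty filter ("No filters were applied, returning full container list"). For reference only, here is a minimal Go sketch that issues the same three calls against the socket named in the node's cri-socket annotation (unix:///var/run/crio/crio.sock). It is not part of the test harness; it assumes the k8s.io/cri-api and google.golang.org/grpc modules are available and that the socket is reachable from where it runs.

// Hedged sketch: reproduce the Version / ImageFsInfo / ListContainers calls
// seen in the CRI-O debug log above. Assumes access to the CRI-O socket.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path taken from the node's kubeadm cri-socket annotation below.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// /runtime.v1.ImageService/ImageFsInfo
	fsInfo, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fsInfo.ImageFilesystems {
		fmt.Println("image fs:", f.FsId.Mountpoint, "used bytes:", f.UsedBytes.Value)
	}

	// /runtime.v1.RuntimeService/ListContainers with no filter, i.e. the
	// "returning full container list" entries above.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.Name, c.State, "attempt", c.Metadata.Attempt)
	}
}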
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6ee28be86cb2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       3                   49a79f66de45c       storage-provisioner
	d97709a18c36b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   e5faf7274a4aa       busybox
	dd7068872d39b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      21 minutes ago      Running             coredns                   1                   882812447bb3f       coredns-7c65d6cfc9-5drgc
	dc7dc19930243       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       2                   49a79f66de45c       storage-provisioner
	444227d730d01       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      21 minutes ago      Running             kube-proxy                1                   69d443d593a98       kube-proxy-5vdcq
	18e899b1e640c       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      21 minutes ago      Running             kube-controller-manager   2                   dee4184f6080c       kube-controller-manager-default-k8s-diff-port-942599
	62b61ec6f08d5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   1d0a1cb74162f       etcd-default-k8s-diff-port-942599
	83b7cd17782f8       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      21 minutes ago      Running             kube-apiserver            2                   de2ea815e00fa       kube-apiserver-default-k8s-diff-port-942599
	40accb73a4e91       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      22 minutes ago      Running             kube-scheduler            1                   24da09f1d450b       kube-scheduler-default-k8s-diff-port-942599
	587008b58cfaa       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      22 minutes ago      Exited              kube-controller-manager   1                   dee4184f6080c       kube-controller-manager-default-k8s-diff-port-942599
	e2d9e7ffdd041       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      22 minutes ago      Exited              kube-apiserver            1                   de2ea815e00fa       kube-apiserver-default-k8s-diff-port-942599
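The CREATED column in the table above ("21 minutes ago", "22 minutes ago") appears to be the CreatedAt field from the ListContainers responses, a Unix timestamp in nanoseconds, rendered relative to the 20:53:53 collection time. A quick hedged check in Go, using the storage-provisioner (attempt 3) value copied from the log:

// Hedged sketch: convert a CreatedAt nanosecond timestamp from the CRI
// response into a wall-clock time and an age relative to 20:53:53 UTC.
package main

import (
	"fmt"
	"time"
)

func main() {
	createdAt := int64(1733430768054513041) // storage-provisioner, attempt 3
	collected := time.Date(2024, time.December, 5, 20, 53, 53, 0, time.UTC)

	created := time.Unix(0, createdAt).UTC()
	fmt.Println(created.Format(time.RFC3339))              // 2024-12-05T20:32:48Z
	fmt.Println(collected.Sub(created).Round(time.Minute)) // roughly 21 minutes
}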
	
	
	==> coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52362 - 25366 "HINFO IN 4187734828424423246.5763596893688110444. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018753362s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-942599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-942599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=default-k8s-diff-port-942599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_24_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:24:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-942599
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:53:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:53:02 +0000   Thu, 05 Dec 2024 20:24:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:53:02 +0000   Thu, 05 Dec 2024 20:24:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:53:02 +0000   Thu, 05 Dec 2024 20:24:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:53:02 +0000   Thu, 05 Dec 2024 20:32:27 +0000   KubeletReady                 kubelet is posting ready status
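The same MemoryPressure/DiskPressure/PIDPressure/Ready conditions can be read programmatically rather than via kubectl describe. A minimal client-go sketch, assuming a default kubeconfig whose current context points at this cluster (the loading path below is the client-go default, not something taken from the test run):

// Hedged sketch: list the node conditions shown in the table above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config already selects the default-k8s-diff-port-942599 context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"default-k8s-diff-port-942599", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}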
	Addresses:
	  InternalIP:  192.168.50.96
	  Hostname:    default-k8s-diff-port-942599
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6b52175b9ca4472aab8c7300eafed722
	  System UUID:                6b52175b-9ca4-472a-ab8c-7300eafed722
	  Boot ID:                    02064eb6-f339-407a-83b1-8bd5c5670f78
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-5drgc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-942599                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-942599             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-942599    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-5vdcq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-942599             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-rq8xm                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-942599 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-942599 event: Registered Node default-k8s-diff-port-942599 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-942599 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-942599 event: Registered Node default-k8s-diff-port-942599 in Controller
	
	
	==> dmesg <==
	[Dec 5 20:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055811] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046796] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.253644] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.881602] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.638676] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.292482] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.061579] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060911] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.214640] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.134249] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.330208] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.485003] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +0.060438] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.014343] systemd-fstab-generator[922]: Ignoring "noauto" option for root device
	[ +14.710623] kauditd_printk_skb: 87 callbacks suppressed
	[Dec 5 20:32] systemd-fstab-generator[1669]: Ignoring "noauto" option for root device
	[  +3.258284] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.431491] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] <==
	{"level":"info","ts":"2024-12-05T20:50:21.406078Z","caller":"traceutil/trace.go:171","msg":"trace[1537974302] transaction","detail":"{read_only:false; response_revision:1498; number_of_response:1; }","duration":"380.267769ms","start":"2024-12-05T20:50:21.025769Z","end":"2024-12-05T20:50:21.406037Z","steps":["trace[1537974302] 'process raft request'  (duration: 379.638001ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:50:21.407576Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T20:50:21.025755Z","time spent":"381.354159ms","remote":"127.0.0.1:48418","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1496 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-12-05T20:51:20.328730Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":12177332725746341421,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-12-05T20:51:20.495574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"644.678103ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:51:20.498122Z","caller":"traceutil/trace.go:171","msg":"trace[101529266] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1543; }","duration":"647.245723ms","start":"2024-12-05T20:51:19.850827Z","end":"2024-12-05T20:51:20.498073Z","steps":["trace[101529266] 'range keys from in-memory index tree'  (duration: 644.66044ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:51:20.497212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.204691ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12177332725746341423 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-dkyjhcehqt5liaocdrwsziqvbe\" mod_revision:1535 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-dkyjhcehqt5liaocdrwsziqvbe\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-dkyjhcehqt5liaocdrwsziqvbe\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-05T20:51:20.498269Z","caller":"traceutil/trace.go:171","msg":"trace[5413028] linearizableReadLoop","detail":"{readStateIndex:1819; appliedIndex:1818; }","duration":"670.050632ms","start":"2024-12-05T20:51:19.828210Z","end":"2024-12-05T20:51:20.498261Z","steps":["trace[5413028] 'read index received'  (duration: 536.345517ms)","trace[5413028] 'applied index is now lower than readState.Index'  (duration: 133.704081ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T20:51:20.498415Z","caller":"traceutil/trace.go:171","msg":"trace[1087023068] transaction","detail":"{read_only:false; response_revision:1544; number_of_response:1; }","duration":"766.300649ms","start":"2024-12-05T20:51:19.732073Z","end":"2024-12-05T20:51:20.498373Z","steps":["trace[1087023068] 'process raft request'  (duration: 632.522781ms)","trace[1087023068] 'compare'  (duration: 131.959976ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T20:51:20.498521Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T20:51:19.732056Z","time spent":"766.405202ms","remote":"127.0.0.1:48526","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-dkyjhcehqt5liaocdrwsziqvbe\" mod_revision:1535 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-dkyjhcehqt5liaocdrwsziqvbe\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-dkyjhcehqt5liaocdrwsziqvbe\" > >"}
	{"level":"warn","ts":"2024-12-05T20:51:20.498829Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"670.633247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:51:20.498883Z","caller":"traceutil/trace.go:171","msg":"trace[352972185] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1544; }","duration":"670.692359ms","start":"2024-12-05T20:51:19.828181Z","end":"2024-12-05T20:51:20.498874Z","steps":["trace[352972185] 'agreement among raft nodes before linearized reading'  (duration: 670.6131ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:51:20.498911Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T20:51:19.828140Z","time spent":"670.763822ms","remote":"127.0.0.1:48444","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-05T20:51:20.499159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"627.249042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-12-05T20:51:20.499212Z","caller":"traceutil/trace.go:171","msg":"trace[1958841188] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1544; }","duration":"627.30818ms","start":"2024-12-05T20:51:19.871895Z","end":"2024-12-05T20:51:20.499203Z","steps":["trace[1958841188] 'agreement among raft nodes before linearized reading'  (duration: 627.22617ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:51:20.499238Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T20:51:19.871862Z","time spent":"627.368589ms","remote":"127.0.0.1:48604","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":2,"response size":30,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-12-05T20:51:20.757820Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.930752ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12177332725746341426 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-942599\" mod_revision:1537 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-942599\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-942599\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-05T20:51:20.757934Z","caller":"traceutil/trace.go:171","msg":"trace[1990100050] transaction","detail":"{read_only:false; response_revision:1545; number_of_response:1; }","duration":"258.550691ms","start":"2024-12-05T20:51:20.499367Z","end":"2024-12-05T20:51:20.757918Z","steps":["trace[1990100050] 'process raft request'  (duration: 126.447235ms)","trace[1990100050] 'compare'  (duration: 131.73361ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T20:51:46.163464Z","caller":"traceutil/trace.go:171","msg":"trace[1598451295] transaction","detail":"{read_only:false; response_revision:1564; number_of_response:1; }","duration":"211.829691ms","start":"2024-12-05T20:51:45.951612Z","end":"2024-12-05T20:51:46.163441Z","steps":["trace[1598451295] 'process raft request'  (duration: 211.733864ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:51:48.322612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.345279ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:51:48.322839Z","caller":"traceutil/trace.go:171","msg":"trace[1155578504] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1566; }","duration":"114.583737ms","start":"2024-12-05T20:51:48.208240Z","end":"2024-12-05T20:51:48.322823Z","steps":["trace[1155578504] 'range keys from in-memory index tree'  (duration: 114.266774ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:52:10.437757Z","caller":"traceutil/trace.go:171","msg":"trace[378389430] transaction","detail":"{read_only:false; response_revision:1583; number_of_response:1; }","duration":"121.259869ms","start":"2024-12-05T20:52:10.316480Z","end":"2024-12-05T20:52:10.437740Z","steps":["trace[378389430] 'process raft request'  (duration: 120.800577ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:52:14.580635Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1344}
	{"level":"info","ts":"2024-12-05T20:52:14.585161Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1344,"took":"4.164192ms","hash":1081060070,"current-db-size-bytes":2707456,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-05T20:52:14.585202Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1081060070,"revision":1344,"compact-revision":1102}
	{"level":"info","ts":"2024-12-05T20:52:38.768012Z","caller":"traceutil/trace.go:171","msg":"trace[377059878] transaction","detail":"{read_only:false; response_revision:1608; number_of_response:1; }","duration":"174.654253ms","start":"2024-12-05T20:52:38.593344Z","end":"2024-12-05T20:52:38.767998Z","steps":["trace[377059878] 'process raft request'  (duration: 174.567727ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:53:54 up 22 min,  0 users,  load average: 0.09, 0.09, 0.09
	Linux default-k8s-diff-port-942599 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] <==
	I1205 20:50:17.121749       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:50:17.122252       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:52:16.121623       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:52:16.121854       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 20:52:17.123916       1 handler_proxy.go:99] no RequestInfo found in the context
	W1205 20:52:17.124024       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:52:17.124133       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1205 20:52:17.124159       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:52:17.125465       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:52:17.125592       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:53:17.125996       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:53:17.126128       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1205 20:53:17.126022       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:53:17.126235       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:53:17.127603       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:53:17.127648       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] <==
	I1205 20:31:36.801930       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1205 20:31:37.580134       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:37.580296       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1205 20:31:37.580379       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1205 20:31:37.584309       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 20:31:37.587959       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1205 20:31:37.588044       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1205 20:31:37.588283       1 instance.go:232] Using reconciler: lease
	W1205 20:31:37.589512       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:38.580972       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:38.580994       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:38.590030       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:39.877297       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:40.027038       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:40.130312       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:42.059538       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:42.857959       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:43.077850       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:46.474142       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:46.477812       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:46.504215       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:51.854440       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:52.797914       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:31:53.778314       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1205 20:31:57.589920       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] <==
	I1205 20:48:45.738993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="421.003µs"
	E1205 20:48:50.339143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:48:50.870965       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:48:56.728732       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="262.746µs"
	E1205 20:49:20.346878       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:49:20.878629       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:49:50.353280       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:49:50.893015       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:50:20.359857       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:50:20.901130       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:50:50.367843       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:50:50.910002       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:51:20.375521       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:51:20.920979       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:51:50.383359       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:51:50.930502       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:52:20.391282       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:52:20.937294       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:52:50.399539       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:52:50.948255       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:53:02.796452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-942599"
	E1205 20:53:20.406258       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:53:20.955216       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:53:50.413339       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:53:50.965253       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] <==
	I1205 20:31:37.332191       1 serving.go:386] Generated self-signed cert in-memory
	I1205 20:31:37.781459       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1205 20:31:37.781551       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:31:37.783214       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1205 20:31:37.783420       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 20:31:37.783455       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1205 20:31:37.783466       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1205 20:32:16.001632       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:32:17.655633       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:32:17.672264       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.96"]
	E1205 20:32:17.672358       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:32:17.736488       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:32:17.736543       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:32:17.736592       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:32:17.741584       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:32:17.742158       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:32:17.742193       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:32:17.744961       1 config.go:199] "Starting service config controller"
	I1205 20:32:17.745053       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:32:17.745135       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:32:17.745157       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:32:17.746428       1 config.go:328] "Starting node config controller"
	I1205 20:32:17.746458       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:32:17.881352       1 shared_informer.go:320] Caches are synced for node config
	I1205 20:32:17.892549       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:32:17.900782       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] <==
	W1205 20:32:16.012392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 20:32:16.012938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.013153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:32:16.013197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.013270       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:32:16.013287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.014838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:32:16.014897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.014973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:32:16.015005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015110       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:32:16.015145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:32:16.015318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:32:16.015415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:32:16.015770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:32:16.015887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.015926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:32:16.015956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:32:16.022121       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:32:16.031952       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1205 20:32:17.703822       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:52:47 default-k8s-diff-port-942599 kubelet[929]: E1205 20:52:47.712495     929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rq8xm" podUID="99b577fd-fbfd-4178-8b06-ef96f118c30b"
	Dec 05 20:52:56 default-k8s-diff-port-942599 kubelet[929]: E1205 20:52:56.065605     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431976065107984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:52:56 default-k8s-diff-port-942599 kubelet[929]: E1205 20:52:56.066133     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431976065107984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:52:59 default-k8s-diff-port-942599 kubelet[929]: E1205 20:52:59.712733     929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rq8xm" podUID="99b577fd-fbfd-4178-8b06-ef96f118c30b"
	Dec 05 20:53:06 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:06.068803     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431986068218094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:53:06 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:06.069266     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431986068218094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:53:12 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:12.711446     929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rq8xm" podUID="99b577fd-fbfd-4178-8b06-ef96f118c30b"
	Dec 05 20:53:16 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:16.071984     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431996071420191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:53:16 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:16.072034     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431996071420191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:53:26 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:26.073920     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733432006073491933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:53:26 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:26.073964     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733432006073491933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:53:26 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:26.711742     929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rq8xm" podUID="99b577fd-fbfd-4178-8b06-ef96f118c30b"
	Dec 05 20:53:35 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:35.751461     929 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 20:53:35 default-k8s-diff-port-942599 kubelet[929]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:53:35 default-k8s-diff-port-942599 kubelet[929]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:53:35 default-k8s-diff-port-942599 kubelet[929]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:53:35 default-k8s-diff-port-942599 kubelet[929]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:53:36 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:36.078579     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733432016077745716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:53:36 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:36.078611     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733432016077745716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:53:40 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:40.730230     929 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 20:53:40 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:40.730348     929 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 20:53:40 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:40.730646     929 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npslt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-rq8xm_kube-system(99b577fd-fbfd-4178-8b06-ef96f118c30b): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 05 20:53:40 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:40.732483     929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-rq8xm" podUID="99b577fd-fbfd-4178-8b06-ef96f118c30b"
	Dec 05 20:53:46 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:46.081004     929 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733432026080545741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:53:46 default-k8s-diff-port-942599 kubelet[929]: E1205 20:53:46.081433     929 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733432026080545741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] <==
	I1205 20:32:17.568449       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 20:32:47.574458       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] <==
	I1205 20:32:48.152193       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:32:48.165620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:32:48.165819       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:33:05.570406       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:33:05.571267       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-942599_356d0fb9-7c51-4de0-b490-dd4f2f392b16!
	I1205 20:33:05.574264       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1be7b79-7151-4907-8d26-e24030f7bb58", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-942599_356d0fb9-7c51-4de0-b490-dd4f2f392b16 became leader
	I1205 20:33:05.672523       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-942599_356d0fb9-7c51-4de0-b490-dd4f2f392b16!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-942599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rq8xm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-942599 describe pod metrics-server-6867b74b74-rq8xm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-942599 describe pod metrics-server-6867b74b74-rq8xm: exit status 1 (71.518243ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rq8xm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-942599 describe pod metrics-server-6867b74b74-rq8xm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (488.69s)
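Note that the ErrImagePull above is expected for this suite: the metrics-server addon was enabled with its registry overridden to the unreachable fake.domain (see the kubelet errors above and the `--registries=MetricsServer=fake.domain` entries in the Audit table below), so the image can never be pulled. A minimal manual check of which image the addon deployment is pointing at, assuming the Deployment is named metrics-server (inferred from the pod name) and kubectl can still reach the cluster:

	kubectl --context default-k8s-diff-port-942599 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'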
E1205 20:54:59.271953  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (285.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-816185 -n no-preload-816185
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-05 20:51:03.458771477 +0000 UTC m=+6555.664391810
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-816185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-816185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.266µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-816185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
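The assertion here reduces to reading the image on the dashboard-metrics-scraper Deployment and checking that it contains registry.k8s.io/echoserver:1.4, the image the addon was enabled with. A minimal manual equivalent, assuming kubectl can still reach the cluster (in the run above the describe returned nothing because the test's context deadline had already expired):

	kubectl --context no-preload-816185 -n kubernetes-dashboard get deployment dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'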
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-816185 -n no-preload-816185
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-816185 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-816185 logs -n 25: (1.301001316s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-789000            | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-242147 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | disable-driver-mounts-242147                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:25 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386085        | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-942599  | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-816185                  | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-789000                 | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:37 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386085             | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-942599       | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:36 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:49 UTC | 05 Dec 24 20:49 UTC |
	| start   | -p newest-cni-024411 --memory=2200 --alsologtostderr   | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:49 UTC | 05 Dec 24 20:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-024411             | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:50 UTC | 05 Dec 24 20:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-024411                                   | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:50 UTC | 05 Dec 24 20:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-024411                  | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:50 UTC | 05 Dec 24 20:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-024411 --memory=2200 --alsologtostderr   | newest-cni-024411            | jenkins | v1.34.0 | 05 Dec 24 20:50 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:50:49
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:50:49.237504  592315 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:50:49.237789  592315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:50:49.237799  592315 out.go:358] Setting ErrFile to fd 2...
	I1205 20:50:49.237802  592315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:50:49.237968  592315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:50:49.238550  592315 out.go:352] Setting JSON to false
	I1205 20:50:49.239620  592315 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12795,"bootTime":1733419054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:50:49.239740  592315 start.go:139] virtualization: kvm guest
	I1205 20:50:49.241912  592315 out.go:177] * [newest-cni-024411] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:50:49.243246  592315 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:50:49.243288  592315 notify.go:220] Checking for updates...
	I1205 20:50:49.245940  592315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:50:49.247249  592315 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:50:49.248493  592315 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:50:49.249895  592315 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:50:49.251214  592315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:50:49.253032  592315 config.go:182] Loaded profile config "newest-cni-024411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:50:49.253423  592315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:50:49.253481  592315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:50:49.269005  592315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I1205 20:50:49.269574  592315 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:50:49.270190  592315 main.go:141] libmachine: Using API Version  1
	I1205 20:50:49.270210  592315 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:50:49.270574  592315 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:50:49.270749  592315 main.go:141] libmachine: (newest-cni-024411) Calling .DriverName
	I1205 20:50:49.270988  592315 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:50:49.271309  592315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:50:49.271350  592315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:50:49.288164  592315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34111
	I1205 20:50:49.288771  592315 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:50:49.289257  592315 main.go:141] libmachine: Using API Version  1
	I1205 20:50:49.289284  592315 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:50:49.289618  592315 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:50:49.289789  592315 main.go:141] libmachine: (newest-cni-024411) Calling .DriverName
	I1205 20:50:49.326147  592315 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:50:49.327380  592315 start.go:297] selected driver: kvm2
	I1205 20:50:49.327395  592315 start.go:901] validating driver "kvm2" against &{Name:newest-cni-024411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-024411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:50:49.327525  592315 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:50:49.328200  592315 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:50:49.328332  592315 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:50:49.344492  592315 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:50:49.344922  592315 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 20:50:49.344954  592315 cni.go:84] Creating CNI manager for ""
	I1205 20:50:49.345000  592315 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:50:49.345035  592315 start.go:340] cluster config:
	{Name:newest-cni-024411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-024411 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:50:49.345169  592315 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:50:49.348042  592315 out.go:177] * Starting "newest-cni-024411" primary control-plane node in "newest-cni-024411" cluster
	I1205 20:50:49.349669  592315 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:50:49.349727  592315 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:50:49.349741  592315 cache.go:56] Caching tarball of preloaded images
	I1205 20:50:49.349845  592315 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:50:49.349860  592315 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:50:49.349991  592315 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/newest-cni-024411/config.json ...
	I1205 20:50:49.350197  592315 start.go:360] acquireMachinesLock for newest-cni-024411: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:50:49.350249  592315 start.go:364] duration metric: took 29.727µs to acquireMachinesLock for "newest-cni-024411"
	I1205 20:50:49.350271  592315 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:50:49.350290  592315 fix.go:54] fixHost starting: 
	I1205 20:50:49.350551  592315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:50:49.350591  592315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:50:49.366103  592315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
	I1205 20:50:49.366627  592315 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:50:49.367133  592315 main.go:141] libmachine: Using API Version  1
	I1205 20:50:49.367154  592315 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:50:49.367540  592315 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:50:49.367749  592315 main.go:141] libmachine: (newest-cni-024411) Calling .DriverName
	I1205 20:50:49.367902  592315 main.go:141] libmachine: (newest-cni-024411) Calling .GetState
	I1205 20:50:49.369360  592315 fix.go:112] recreateIfNeeded on newest-cni-024411: state=Stopped err=<nil>
	I1205 20:50:49.369390  592315 main.go:141] libmachine: (newest-cni-024411) Calling .DriverName
	W1205 20:50:49.369553  592315 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:50:49.372388  592315 out.go:177] * Restarting existing kvm2 VM for "newest-cni-024411" ...
	I1205 20:50:49.373784  592315 main.go:141] libmachine: (newest-cni-024411) Calling .Start
	I1205 20:50:49.374001  592315 main.go:141] libmachine: (newest-cni-024411) Ensuring networks are active...
	I1205 20:50:49.374729  592315 main.go:141] libmachine: (newest-cni-024411) Ensuring network default is active
	I1205 20:50:49.375095  592315 main.go:141] libmachine: (newest-cni-024411) Ensuring network mk-newest-cni-024411 is active
	I1205 20:50:49.375505  592315 main.go:141] libmachine: (newest-cni-024411) Getting domain xml...
	I1205 20:50:49.376347  592315 main.go:141] libmachine: (newest-cni-024411) Creating domain...
	I1205 20:50:50.659787  592315 main.go:141] libmachine: (newest-cni-024411) Waiting to get IP...
	I1205 20:50:50.660730  592315 main.go:141] libmachine: (newest-cni-024411) DBG | domain newest-cni-024411 has defined MAC address 52:54:00:19:91:f4 in network mk-newest-cni-024411
	I1205 20:50:50.661263  592315 main.go:141] libmachine: (newest-cni-024411) DBG | unable to find current IP address of domain newest-cni-024411 in network mk-newest-cni-024411
	I1205 20:50:50.661332  592315 main.go:141] libmachine: (newest-cni-024411) DBG | I1205 20:50:50.661220  592350 retry.go:31] will retry after 191.198079ms: waiting for machine to come up
	I1205 20:50:50.853758  592315 main.go:141] libmachine: (newest-cni-024411) DBG | domain newest-cni-024411 has defined MAC address 52:54:00:19:91:f4 in network mk-newest-cni-024411
	I1205 20:50:50.854349  592315 main.go:141] libmachine: (newest-cni-024411) DBG | unable to find current IP address of domain newest-cni-024411 in network mk-newest-cni-024411
	I1205 20:50:50.854378  592315 main.go:141] libmachine: (newest-cni-024411) DBG | I1205 20:50:50.854271  592350 retry.go:31] will retry after 282.919608ms: waiting for machine to come up
	I1205 20:50:51.139047  592315 main.go:141] libmachine: (newest-cni-024411) DBG | domain newest-cni-024411 has defined MAC address 52:54:00:19:91:f4 in network mk-newest-cni-024411
	I1205 20:50:51.139576  592315 main.go:141] libmachine: (newest-cni-024411) DBG | unable to find current IP address of domain newest-cni-024411 in network mk-newest-cni-024411
	I1205 20:50:51.139604  592315 main.go:141] libmachine: (newest-cni-024411) DBG | I1205 20:50:51.139519  592350 retry.go:31] will retry after 332.046265ms: waiting for machine to come up
	I1205 20:50:51.473076  592315 main.go:141] libmachine: (newest-cni-024411) DBG | domain newest-cni-024411 has defined MAC address 52:54:00:19:91:f4 in network mk-newest-cni-024411
	I1205 20:50:51.473543  592315 main.go:141] libmachine: (newest-cni-024411) DBG | unable to find current IP address of domain newest-cni-024411 in network mk-newest-cni-024411
	I1205 20:50:51.473574  592315 main.go:141] libmachine: (newest-cni-024411) DBG | I1205 20:50:51.473490  592350 retry.go:31] will retry after 565.797968ms: waiting for machine to come up
	I1205 20:50:52.041385  592315 main.go:141] libmachine: (newest-cni-024411) DBG | domain newest-cni-024411 has defined MAC address 52:54:00:19:91:f4 in network mk-newest-cni-024411
	I1205 20:50:52.041989  592315 main.go:141] libmachine: (newest-cni-024411) DBG | unable to find current IP address of domain newest-cni-024411 in network mk-newest-cni-024411
	I1205 20:50:52.042031  592315 main.go:141] libmachine: (newest-cni-024411) DBG | I1205 20:50:52.041919  592350 retry.go:31] will retry after 538.283378ms: waiting for machine to come up
	I1205 20:50:52.581502  592315 main.go:141] libmachine: (newest-cni-024411) DBG | domain newest-cni-024411 has defined MAC address 52:54:00:19:91:f4 in network mk-newest-cni-024411
	I1205 20:50:52.581932  592315 main.go:141] libmachine: (newest-cni-024411) DBG | unable to find current IP address of domain newest-cni-024411 in network mk-newest-cni-024411
	I1205 20:50:52.581956  592315 main.go:141] libmachine: (newest-cni-024411) DBG | I1205 20:50:52.581885  592350 retry.go:31] will retry after 919.519219ms: waiting for machine to come up
	I1205 20:50:53.503002  592315 main.go:141] libmachine: (newest-cni-024411) DBG | domain newest-cni-024411 has defined MAC address 52:54:00:19:91:f4 in network mk-newest-cni-024411
	I1205 20:50:53.503491  592315 main.go:141] libmachine: (newest-cni-024411) DBG | unable to find current IP address of domain newest-cni-024411 in network mk-newest-cni-024411
	I1205 20:50:53.503517  592315 main.go:141] libmachine: (newest-cni-024411) DBG | I1205 20:50:53.503450  592350 retry.go:31] will retry after 1.056067443s: waiting for machine to come up
	I1205 20:50:54.560863  592315 main.go:141] libmachine: (newest-cni-024411) DBG | domain newest-cni-024411 has defined MAC address 52:54:00:19:91:f4 in network mk-newest-cni-024411
	I1205 20:50:54.561389  592315 main.go:141] libmachine: (newest-cni-024411) DBG | unable to find current IP address of domain newest-cni-024411 in network mk-newest-cni-024411
	I1205 20:50:54.561442  592315 main.go:141] libmachine: (newest-cni-024411) DBG | I1205 20:50:54.561341  592350 retry.go:31] will retry after 930.904776ms: waiting for machine to come up
	I1205 20:50:55.493475  592315 main.go:141] libmachine: (newest-cni-024411) DBG | domain newest-cni-024411 has defined MAC address 52:54:00:19:91:f4 in network mk-newest-cni-024411
	I1205 20:50:55.493982  592315 main.go:141] libmachine: (newest-cni-024411) DBG | unable to find current IP address of domain newest-cni-024411 in network mk-newest-cni-024411
	I1205 20:50:55.494018  592315 main.go:141] libmachine: (newest-cni-024411) DBG | I1205 20:50:55.493927  592350 retry.go:31] will retry after 1.384254559s: waiting for machine to come up
	I1205 20:50:56.880643  592315 main.go:141] libmachine: (newest-cni-024411) DBG | domain newest-cni-024411 has defined MAC address 52:54:00:19:91:f4 in network mk-newest-cni-024411
	I1205 20:50:56.881200  592315 main.go:141] libmachine: (newest-cni-024411) DBG | unable to find current IP address of domain newest-cni-024411 in network mk-newest-cni-024411
	I1205 20:50:56.881235  592315 main.go:141] libmachine: (newest-cni-024411) DBG | I1205 20:50:56.881133  592350 retry.go:31] will retry after 1.491024456s: waiting for machine to come up
	I1205 20:50:58.374071  592315 main.go:141] libmachine: (newest-cni-024411) DBG | domain newest-cni-024411 has defined MAC address 52:54:00:19:91:f4 in network mk-newest-cni-024411
	I1205 20:50:58.374612  592315 main.go:141] libmachine: (newest-cni-024411) DBG | unable to find current IP address of domain newest-cni-024411 in network mk-newest-cni-024411
	I1205 20:50:58.374637  592315 main.go:141] libmachine: (newest-cni-024411) DBG | I1205 20:50:58.374569  592350 retry.go:31] will retry after 1.877616164s: waiting for machine to come up
	
	
	==> CRI-O <==
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.164515925Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1a07fe4-52e7-4dbd-a000-7d30d3b8ca80 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.165720516Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcd146d3-4ac1-4108-a5cd-5afcfaee019f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.166288260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431864166260926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcd146d3-4ac1-4108-a5cd-5afcfaee019f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.166999130Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c0b9a83-f257-4847-bd17-379d8bfc2b26 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.167070711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c0b9a83-f257-4847-bd17-379d8bfc2b26 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.167320480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c0f24978e39c68485fd113a97875a68cbb55f2f341d2ed2baf5b273a694d58,PodSandboxId:931006d780b13a9308d6d2327c9b419e91bffeb3de9e5935cbe02b0851d15e4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733431027656385078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f33e249-9330-428f-8feb-9f3cf44369be,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ae48ce56e049c518439e1342b38f928063d63a4a96344c4ef2c1bcca644865,PodSandboxId:53a99c02594aa0576b861bf3e66787ace2d67583a1f434a0f7d096ec8b3759d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027162024700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gmc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06fcfd39b3fbd6c769eaefecb47afeb543173243258fcf76a17d219da4f2ad6,PodSandboxId:66f4764fbff897d16e10a89a56a33bde2486c315a561e8d9085731c7739aee88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027074517693,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fmcnh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6a91c8-af65-4fb6-af77-0a6c45d224a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6376a359c82bf49229c09bb6cdcea5e1f9805707a7170dc09f462f3387283518,PodSandboxId:9d9ec7600f03c33be405dc2c489181ae902a5cfceea04e1efa0dd1cb864461b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733431026478333480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8thq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be5b50a-e564-4d80-82c4-357db41a3c1e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618e7986042ae61241a2e82d3d6c8bcefb90838bb7c71055021020555e0b3299,PodSandboxId:35edf78f25c164581f5fe53dc04477d4f7b5f86a809070c06c6e8a6195e80344,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733431015687776718,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4174cf957b5e115c096629210df81f40bc1558a5af8b9dd0145a6eff3e4be3f9,PodSandboxId:84131f77c9d95245453f83e0492a9caf58e571c574c74cdcfabf14f033e3d065,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733431015698339930,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7450815c261e8981559e06736aa54abfbc808b74e77d643fb144e294aa664284,PodSandboxId:b7277a915edeb0280426b492ba4ac082dfb03f2c3487f931267e7922d51923e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733431015670986045,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506e81d12c5f83cd43b2eff2f0c3d34c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161f5440479a346e1b4482f9f909e116f60da19890fd7b1635ef87164a1978fa,PodSandboxId:e91f3f0c1dbac00ec563b1a5bee614262901ddf614869a170c805a03422d225e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733431015562416640,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e666b83de89497cad0416a7019a3f69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecd63676c708063caf41eb906794e14fa58acf17026504bce946f0a33f379e64,PodSandboxId:ea53df1bd26635b77439dfd8964fe32893903a6a261115b69cea74ec25ab65ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430727274991857,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c0b9a83-f257-4847-bd17-379d8bfc2b26 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.208457859Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c36018c3-491f-4846-b9ff-43035168d91f name=/runtime.v1.RuntimeService/Version
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.208551008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c36018c3-491f-4846-b9ff-43035168d91f name=/runtime.v1.RuntimeService/Version
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.210155178Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87a43182-da65-4d13-893a-bb361b0e7db6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.210515170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431864210493307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87a43182-da65-4d13-893a-bb361b0e7db6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.211363172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b642a260-d4c7-4e88-8b8c-96d0a6b06653 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.211464649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b642a260-d4c7-4e88-8b8c-96d0a6b06653 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.211728111Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c0f24978e39c68485fd113a97875a68cbb55f2f341d2ed2baf5b273a694d58,PodSandboxId:931006d780b13a9308d6d2327c9b419e91bffeb3de9e5935cbe02b0851d15e4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733431027656385078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f33e249-9330-428f-8feb-9f3cf44369be,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ae48ce56e049c518439e1342b38f928063d63a4a96344c4ef2c1bcca644865,PodSandboxId:53a99c02594aa0576b861bf3e66787ace2d67583a1f434a0f7d096ec8b3759d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027162024700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gmc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06fcfd39b3fbd6c769eaefecb47afeb543173243258fcf76a17d219da4f2ad6,PodSandboxId:66f4764fbff897d16e10a89a56a33bde2486c315a561e8d9085731c7739aee88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027074517693,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fmcnh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6a91c8-af65-4fb6-af77-0a6c45d224a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6376a359c82bf49229c09bb6cdcea5e1f9805707a7170dc09f462f3387283518,PodSandboxId:9d9ec7600f03c33be405dc2c489181ae902a5cfceea04e1efa0dd1cb864461b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733431026478333480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8thq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be5b50a-e564-4d80-82c4-357db41a3c1e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618e7986042ae61241a2e82d3d6c8bcefb90838bb7c71055021020555e0b3299,PodSandboxId:35edf78f25c164581f5fe53dc04477d4f7b5f86a809070c06c6e8a6195e80344,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733431015687776718,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4174cf957b5e115c096629210df81f40bc1558a5af8b9dd0145a6eff3e4be3f9,PodSandboxId:84131f77c9d95245453f83e0492a9caf58e571c574c74cdcfabf14f033e3d065,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733431015698339930,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7450815c261e8981559e06736aa54abfbc808b74e77d643fb144e294aa664284,PodSandboxId:b7277a915edeb0280426b492ba4ac082dfb03f2c3487f931267e7922d51923e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733431015670986045,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506e81d12c5f83cd43b2eff2f0c3d34c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161f5440479a346e1b4482f9f909e116f60da19890fd7b1635ef87164a1978fa,PodSandboxId:e91f3f0c1dbac00ec563b1a5bee614262901ddf614869a170c805a03422d225e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733431015562416640,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e666b83de89497cad0416a7019a3f69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecd63676c708063caf41eb906794e14fa58acf17026504bce946f0a33f379e64,PodSandboxId:ea53df1bd26635b77439dfd8964fe32893903a6a261115b69cea74ec25ab65ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430727274991857,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b642a260-d4c7-4e88-8b8c-96d0a6b06653 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.238277288Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=b4d8fb30-ab82-4caf-a757-de44adc62661 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.238553623Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:931006d780b13a9308d6d2327c9b419e91bffeb3de9e5935cbe02b0851d15e4e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7f33e249-9330-428f-8feb-9f3cf44369be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431027322408407,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f33e249-9330-428f-8feb-9f3cf44369be,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-05T20:37:07.001695584Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33f534c874be20f01361921642bf978866141dfa6b2ce262c522ea2f7a906676,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-8vmd6,Uid:d838e6e3-bd74-4653-9289-4f5375b03d4f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431027268410625,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-8vmd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d838e6e3-bd74-4653-9289-4f5375b03d4f
,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:37:06.948262142Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53a99c02594aa0576b861bf3e66787ace2d67583a1f434a0f7d096ec8b3759d4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gmc2j,Uid:2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431026384037191,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gmc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:37:06.076394890Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:66f4764fbff897d16e10a89a56a33bde2486c315a561e8d9085731c7739aee88,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fmcnh,Uid:fb6a91c8-af65-4fb6-
af77-0a6c45d224a7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431026308392649,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fmcnh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb6a91c8-af65-4fb6-af77-0a6c45d224a7,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:37:05.999357503Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d9ec7600f03c33be405dc2c489181ae902a5cfceea04e1efa0dd1cb864461b7,Metadata:&PodSandboxMetadata{Name:kube-proxy-q8thq,Uid:8be5b50a-e564-4d80-82c4-357db41a3c1e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431026251394143,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-q8thq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be5b50a-e564-4d80-82c4-357db41a3c1e,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:37:05.912759343Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:35edf78f25c164581f5fe53dc04477d4f7b5f86a809070c06c6e8a6195e80344,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-816185,Uid:da8680bb881144cc526df7f123fe0e95,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733431015391792298,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.37:8443,kubernetes.io/config.hash: da8680bb881144cc526df7f123fe0e95,kubernetes.io/config.seen: 2024-12-05T20:36:54.928636544Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b7277a915edeb0280426b492ba4ac082
dfb03f2c3487f931267e7922d51923e9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-816185,Uid:506e81d12c5f83cd43b2eff2f0c3d34c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431015390344854,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506e81d12c5f83cd43b2eff2f0c3d34c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 506e81d12c5f83cd43b2eff2f0c3d34c,kubernetes.io/config.seen: 2024-12-05T20:36:54.928638547Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:84131f77c9d95245453f83e0492a9caf58e571c574c74cdcfabf14f033e3d065,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-816185,Uid:9aa5f7ec329fe85df7db1b6e2f2e8ca6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431015380220784,Labels:map[string]string{component: kube-sche
duler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,kubernetes.io/config.seen: 2024-12-05T20:36:54.928640184Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e91f3f0c1dbac00ec563b1a5bee614262901ddf614869a170c805a03422d225e,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-816185,Uid:3e666b83de89497cad0416a7019a3f69,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733431015379675552,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e666b83de89497cad0416a7019a3f69,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.37:2379,
kubernetes.io/config.hash: 3e666b83de89497cad0416a7019a3f69,kubernetes.io/config.seen: 2024-12-05T20:36:54.928631672Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ea53df1bd26635b77439dfd8964fe32893903a6a261115b69cea74ec25ab65ac,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-816185,Uid:da8680bb881144cc526df7f123fe0e95,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1733430727125599629,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.37:8443,kubernetes.io/config.hash: da8680bb881144cc526df7f123fe0e95,kubernetes.io/config.seen: 2024-12-05T20:32:06.607025424Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/intercep
tors.go:74" id=b4d8fb30-ab82-4caf-a757-de44adc62661 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.239469677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=182cfd84-b909-4e33-bb22-ec84f227f6ff name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.239548695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=182cfd84-b909-4e33-bb22-ec84f227f6ff name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.239740349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c0f24978e39c68485fd113a97875a68cbb55f2f341d2ed2baf5b273a694d58,PodSandboxId:931006d780b13a9308d6d2327c9b419e91bffeb3de9e5935cbe02b0851d15e4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733431027656385078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f33e249-9330-428f-8feb-9f3cf44369be,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ae48ce56e049c518439e1342b38f928063d63a4a96344c4ef2c1bcca644865,PodSandboxId:53a99c02594aa0576b861bf3e66787ace2d67583a1f434a0f7d096ec8b3759d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027162024700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gmc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06fcfd39b3fbd6c769eaefecb47afeb543173243258fcf76a17d219da4f2ad6,PodSandboxId:66f4764fbff897d16e10a89a56a33bde2486c315a561e8d9085731c7739aee88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027074517693,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fmcnh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6a91c8-af65-4fb6-af77-0a6c45d224a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6376a359c82bf49229c09bb6cdcea5e1f9805707a7170dc09f462f3387283518,PodSandboxId:9d9ec7600f03c33be405dc2c489181ae902a5cfceea04e1efa0dd1cb864461b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733431026478333480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8thq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be5b50a-e564-4d80-82c4-357db41a3c1e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618e7986042ae61241a2e82d3d6c8bcefb90838bb7c71055021020555e0b3299,PodSandboxId:35edf78f25c164581f5fe53dc04477d4f7b5f86a809070c06c6e8a6195e80344,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733431015687776718,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4174cf957b5e115c096629210df81f40bc1558a5af8b9dd0145a6eff3e4be3f9,PodSandboxId:84131f77c9d95245453f83e0492a9caf58e571c574c74cdcfabf14f033e3d065,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733431015698339930,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7450815c261e8981559e06736aa54abfbc808b74e77d643fb144e294aa664284,PodSandboxId:b7277a915edeb0280426b492ba4ac082dfb03f2c3487f931267e7922d51923e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733431015670986045,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506e81d12c5f83cd43b2eff2f0c3d34c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161f5440479a346e1b4482f9f909e116f60da19890fd7b1635ef87164a1978fa,PodSandboxId:e91f3f0c1dbac00ec563b1a5bee614262901ddf614869a170c805a03422d225e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733431015562416640,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e666b83de89497cad0416a7019a3f69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecd63676c708063caf41eb906794e14fa58acf17026504bce946f0a33f379e64,PodSandboxId:ea53df1bd26635b77439dfd8964fe32893903a6a261115b69cea74ec25ab65ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430727274991857,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=182cfd84-b909-4e33-bb22-ec84f227f6ff name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.250473505Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f600938-28d2-4311-b3d0-2f3be88416d4 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.250563734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f600938-28d2-4311-b3d0-2f3be88416d4 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.251919083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b67a4a06-9b99-4a50-bd20-a9ce8a094e39 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.252241147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431864252219320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b67a4a06-9b99-4a50-bd20-a9ce8a094e39 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.252672491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72142c9e-ee62-4011-9510-752bed987e6f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.252745489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72142c9e-ee62-4011-9510-752bed987e6f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:51:04 no-preload-816185 crio[716]: time="2024-12-05 20:51:04.253037707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c0f24978e39c68485fd113a97875a68cbb55f2f341d2ed2baf5b273a694d58,PodSandboxId:931006d780b13a9308d6d2327c9b419e91bffeb3de9e5935cbe02b0851d15e4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733431027656385078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f33e249-9330-428f-8feb-9f3cf44369be,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ae48ce56e049c518439e1342b38f928063d63a4a96344c4ef2c1bcca644865,PodSandboxId:53a99c02594aa0576b861bf3e66787ace2d67583a1f434a0f7d096ec8b3759d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027162024700,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gmc2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06fcfd39b3fbd6c769eaefecb47afeb543173243258fcf76a17d219da4f2ad6,PodSandboxId:66f4764fbff897d16e10a89a56a33bde2486c315a561e8d9085731c7739aee88,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733431027074517693,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fmcnh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
6a91c8-af65-4fb6-af77-0a6c45d224a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6376a359c82bf49229c09bb6cdcea5e1f9805707a7170dc09f462f3387283518,PodSandboxId:9d9ec7600f03c33be405dc2c489181ae902a5cfceea04e1efa0dd1cb864461b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733431026478333480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q8thq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be5b50a-e564-4d80-82c4-357db41a3c1e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618e7986042ae61241a2e82d3d6c8bcefb90838bb7c71055021020555e0b3299,PodSandboxId:35edf78f25c164581f5fe53dc04477d4f7b5f86a809070c06c6e8a6195e80344,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733431015687776718,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4174cf957b5e115c096629210df81f40bc1558a5af8b9dd0145a6eff3e4be3f9,PodSandboxId:84131f77c9d95245453f83e0492a9caf58e571c574c74cdcfabf14f033e3d065,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733431015698339930,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa5f7ec329fe85df7db1b6e2f2e8ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7450815c261e8981559e06736aa54abfbc808b74e77d643fb144e294aa664284,PodSandboxId:b7277a915edeb0280426b492ba4ac082dfb03f2c3487f931267e7922d51923e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733431015670986045,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506e81d12c5f83cd43b2eff2f0c3d34c,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161f5440479a346e1b4482f9f909e116f60da19890fd7b1635ef87164a1978fa,PodSandboxId:e91f3f0c1dbac00ec563b1a5bee614262901ddf614869a170c805a03422d225e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733431015562416640,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e666b83de89497cad0416a7019a3f69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecd63676c708063caf41eb906794e14fa58acf17026504bce946f0a33f379e64,PodSandboxId:ea53df1bd26635b77439dfd8964fe32893903a6a261115b69cea74ec25ab65ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733430727274991857,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-816185,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8680bb881144cc526df7f123fe0e95,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72142c9e-ee62-4011-9510-752bed987e6f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	92c0f24978e39       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   931006d780b13       storage-provisioner
	f4ae48ce56e04       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 minutes ago      Running             coredns                   0                   53a99c02594aa       coredns-7c65d6cfc9-gmc2j
	d06fcfd39b3fb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 minutes ago      Running             coredns                   0                   66f4764fbff89       coredns-7c65d6cfc9-fmcnh
	6376a359c82bf       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   13 minutes ago      Running             kube-proxy                0                   9d9ec7600f03c       kube-proxy-q8thq
	4174cf957b5e1       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   14 minutes ago      Running             kube-scheduler            2                   84131f77c9d95       kube-scheduler-no-preload-816185
	618e7986042ae       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Running             kube-apiserver            2                   35edf78f25c16       kube-apiserver-no-preload-816185
	7450815c261e8       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   14 minutes ago      Running             kube-controller-manager   2                   b7277a915edeb       kube-controller-manager-no-preload-816185
	161f5440479a3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   e91f3f0c1dbac       etcd-no-preload-816185
	ecd63676c7080       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   18 minutes ago      Exited              kube-apiserver            1                   ea53df1bd2663       kube-apiserver-no-preload-816185
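
The table above is the CRI runtime's own view of the node at collection time; it corresponds to the ListContainers responses in the preceding crio debug log. As a sketch, assuming crictl is available inside the guest VM and the profile is still running, roughly the same listing can be reproduced by hand with:

    minikube ssh -p no-preload-816185 "sudo crictl ps -a"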
	
	
	==> coredns [d06fcfd39b3fbd6c769eaefecb47afeb543173243258fcf76a17d219da4f2ad6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f4ae48ce56e049c518439e1342b38f928063d63a4a96344c4ef2c1bcca644865] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
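
Both coredns instances only print their startup banner in this excerpt; no query errors appear. Assuming the kubeconfig context is named after the profile, as with the other clusters in this report, the full pod logs can be pulled directly for comparison:

    kubectl --context no-preload-816185 -n kube-system logs coredns-7c65d6cfc9-gmc2j
    kubectl --context no-preload-816185 -n kube-system logs coredns-7c65d6cfc9-fmcnh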
	
	
	==> describe nodes <==
	Name:               no-preload-816185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-816185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=no-preload-816185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_37_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:36:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-816185
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:50:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:47:22 +0000   Thu, 05 Dec 2024 20:36:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:47:22 +0000   Thu, 05 Dec 2024 20:36:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:47:22 +0000   Thu, 05 Dec 2024 20:36:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:47:22 +0000   Thu, 05 Dec 2024 20:36:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.37
	  Hostname:    no-preload-816185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2460e8cea62d4fb59d491e8972590e87
	  System UUID:                2460e8ce-a62d-4fb5-9d49-1e8972590e87
	  Boot ID:                    0830dec6-1ea9-489d-962f-e22d48911390
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-fmcnh                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-gmc2j                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-816185                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-816185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-816185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-q8thq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-816185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-8vmd6              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-816185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-816185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-816185 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node no-preload-816185 event: Registered Node no-preload-816185 in Controller
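
This is the usual kubectl describe node snapshot: the node is Ready and metrics-server-6867b74b74-8vmd6 is scheduled, even though the apiserver log further below shows its aggregated API failing. Assuming the same context naming convention, the snapshot can be refreshed with:

    kubectl --context no-preload-816185 describe node no-preload-816185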
	
	
	==> dmesg <==
	[  +0.045078] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.207107] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.892115] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.643364] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.014661] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.059569] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064310] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.194812] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.149612] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.293826] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[Dec 5 20:32] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.061754] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.805619] systemd-fstab-generator[1438]: Ignoring "noauto" option for root device
	[  +4.499754] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.046674] kauditd_printk_skb: 79 callbacks suppressed
	[ +27.038300] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 5 20:36] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.712641] systemd-fstab-generator[3136]: Ignoring "noauto" option for root device
	[  +6.074315] systemd-fstab-generator[3465]: Ignoring "noauto" option for root device
	[Dec 5 20:37] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.801087] systemd-fstab-generator[3589]: Ignoring "noauto" option for root device
	[  +0.853578] kauditd_printk_skb: 36 callbacks suppressed
	[  +7.370342] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [161f5440479a346e1b4482f9f909e116f60da19890fd7b1635ef87164a1978fa] <==
	{"level":"info","ts":"2024-12-05T20:36:56.049876Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.37:2380"}
	{"level":"info","ts":"2024-12-05T20:36:56.483904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-05T20:36:56.483998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-05T20:36:56.484049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 received MsgPreVoteResp from 32539c5013f3ec41 at term 1"}
	{"level":"info","ts":"2024-12-05T20:36:56.484085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 became candidate at term 2"}
	{"level":"info","ts":"2024-12-05T20:36:56.484109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 received MsgVoteResp from 32539c5013f3ec41 at term 2"}
	{"level":"info","ts":"2024-12-05T20:36:56.484136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32539c5013f3ec41 became leader at term 2"}
	{"level":"info","ts":"2024-12-05T20:36:56.484162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 32539c5013f3ec41 elected leader 32539c5013f3ec41 at term 2"}
	{"level":"info","ts":"2024-12-05T20:36:56.488117Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"32539c5013f3ec41","local-member-attributes":"{Name:no-preload-816185 ClientURLs:[https://192.168.61.37:2379]}","request-path":"/0/members/32539c5013f3ec41/attributes","cluster-id":"ee6bec4ef8ef7744","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T20:36:56.488338Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:36:56.493936Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:36:56.494372Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T20:36:56.496122Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T20:36:56.499015Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T20:36:56.498528Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:36:56.499781Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.37:2379"}
	{"level":"info","ts":"2024-12-05T20:36:56.500117Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ee6bec4ef8ef7744","local-member-id":"32539c5013f3ec41","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:36:56.505170Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:36:56.505272Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T20:36:56.512336Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T20:36:56.517085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T20:46:56.566990Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":689}
	{"level":"info","ts":"2024-12-05T20:46:56.580149Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":689,"took":"11.804473ms","hash":2472120335,"current-db-size-bytes":2179072,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2179072,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-12-05T20:46:56.580263Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2472120335,"revision":689,"compact-revision":-1}
	{"level":"info","ts":"2024-12-05T20:50:04.361946Z","caller":"traceutil/trace.go:171","msg":"trace[175350263] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"122.003662ms","start":"2024-12-05T20:50:04.239885Z","end":"2024-12-05T20:50:04.361888Z","steps":["trace[175350263] 'process raft request'  (duration: 121.796391ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:51:04 up 19 min,  0 users,  load average: 0.56, 0.24, 0.16
	Linux no-preload-816185 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [618e7986042ae61241a2e82d3d6c8bcefb90838bb7c71055021020555e0b3299] <==
	W1205 20:46:59.163758       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:46:59.163951       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:46:59.165181       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:46:59.165293       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:47:59.165434       1 handler_proxy.go:99] no RequestInfo found in the context
	W1205 20:47:59.165519       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:47:59.165697       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1205 20:47:59.165644       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:47:59.167796       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:47:59.167864       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 20:49:59.168558       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:49:59.168963       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1205 20:49:59.168584       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 20:49:59.169176       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 20:49:59.170384       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 20:49:59.170470       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
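
The repeated 503 responses above mean the aggregation layer cannot reach the service backing the v1beta1.metrics.k8s.io APIService, i.e. the metrics-server pod is not serving. Two quick checks (a sketch, assuming the context is named after the profile and using the k8s-app: metrics-server label seen in the sandbox metadata earlier in this log):

    kubectl --context no-preload-816185 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-816185 -n kube-system get pods -l k8s-app=metrics-server -o wide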
	
	
	==> kube-apiserver [ecd63676c708063caf41eb906794e14fa58acf17026504bce946f0a33f379e64] <==
	W1205 20:36:48.209017       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:50.576909       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:50.947091       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:51.454426       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:51.578238       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:51.786247       1 logging.go:55] [core] [Channel #16 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:51.799935       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:51.983349       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.083885       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.280212       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.302166       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.376576       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.394316       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.410109       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.423938       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.488368       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.794747       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.814263       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:52.898555       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.002731       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.018683       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.020093       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.036037       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.093178       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 20:36:53.095656       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7450815c261e8981559e06736aa54abfbc808b74e77d643fb144e294aa664284] <==
	E1205 20:45:35.202709       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:45:35.691099       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:46:05.209030       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:46:05.701074       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:46:35.217644       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:46:35.709933       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:47:05.225301       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:47:05.725025       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:47:22.607356       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-816185"
	E1205 20:47:35.233033       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:47:35.734957       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:47:57.938495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.150133ms"
	E1205 20:48:05.239890       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:48:05.745184       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 20:48:12.932676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="81.697µs"
	E1205 20:48:35.247205       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:48:35.756021       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:49:05.254174       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:49:05.764919       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:49:35.261258       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:49:35.773528       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:50:05.268035       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:50:05.781155       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 20:50:35.275254       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 20:50:35.792147       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6376a359c82bf49229c09bb6cdcea5e1f9805707a7170dc09f462f3387283518] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:37:07.331870       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:37:07.368405       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.37"]
	E1205 20:37:07.376336       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:37:07.582178       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:37:07.582221       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:37:07.582253       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:37:07.585614       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:37:07.586133       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:37:07.586342       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:37:07.587739       1 config.go:199] "Starting service config controller"
	I1205 20:37:07.587798       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:37:07.587947       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:37:07.588043       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:37:07.588561       1 config.go:328] "Starting node config controller"
	I1205 20:37:07.588633       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:37:07.688491       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:37:07.688678       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:37:07.688692       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4174cf957b5e115c096629210df81f40bc1558a5af8b9dd0145a6eff3e4be3f9] <==
	W1205 20:36:58.211088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:36:58.212788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.112415       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:36:59.112484       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 20:36:59.176506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:36:59.176565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.288043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:36:59.288101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.303975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:36:59.304027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.336035       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:36:59.336089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.340999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:36:59.341049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.473292       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 20:36:59.473390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.479286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:36:59.479376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.497573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:36:59.498277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.541581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:36:59.542482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:36:59.565563       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:36:59.565691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1205 20:37:01.696505       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:50:00 no-preload-816185 kubelet[3472]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:50:01 no-preload-816185 kubelet[3472]: E1205 20:50:01.104344    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431801103718750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:01 no-preload-816185 kubelet[3472]: E1205 20:50:01.104381    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431801103718750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:07 no-preload-816185 kubelet[3472]: E1205 20:50:07.914720    3472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vmd6" podUID="d838e6e3-bd74-4653-9289-4f5375b03d4f"
	Dec 05 20:50:11 no-preload-816185 kubelet[3472]: E1205 20:50:11.108881    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431811106112867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:11 no-preload-816185 kubelet[3472]: E1205 20:50:11.109434    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431811106112867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:19 no-preload-816185 kubelet[3472]: E1205 20:50:19.914446    3472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vmd6" podUID="d838e6e3-bd74-4653-9289-4f5375b03d4f"
	Dec 05 20:50:21 no-preload-816185 kubelet[3472]: E1205 20:50:21.113758    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431821113036755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:21 no-preload-816185 kubelet[3472]: E1205 20:50:21.113868    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431821113036755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:31 no-preload-816185 kubelet[3472]: E1205 20:50:31.115238    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431831114879569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:31 no-preload-816185 kubelet[3472]: E1205 20:50:31.115290    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431831114879569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:34 no-preload-816185 kubelet[3472]: E1205 20:50:34.915402    3472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vmd6" podUID="d838e6e3-bd74-4653-9289-4f5375b03d4f"
	Dec 05 20:50:41 no-preload-816185 kubelet[3472]: E1205 20:50:41.116512    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431841116185452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:41 no-preload-816185 kubelet[3472]: E1205 20:50:41.116542    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431841116185452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:49 no-preload-816185 kubelet[3472]: E1205 20:50:49.915322    3472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vmd6" podUID="d838e6e3-bd74-4653-9289-4f5375b03d4f"
	Dec 05 20:50:51 no-preload-816185 kubelet[3472]: E1205 20:50:51.120625    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431851119185803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:50:51 no-preload-816185 kubelet[3472]: E1205 20:50:51.121255    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431851119185803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:00 no-preload-816185 kubelet[3472]: E1205 20:51:00.947095    3472 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 20:51:00 no-preload-816185 kubelet[3472]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:51:00 no-preload-816185 kubelet[3472]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:51:00 no-preload-816185 kubelet[3472]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:51:00 no-preload-816185 kubelet[3472]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:51:01 no-preload-816185 kubelet[3472]: E1205 20:51:01.125092    3472 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431861124455403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:01 no-preload-816185 kubelet[3472]: E1205 20:51:01.125138    3472 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431861124455403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:51:01 no-preload-816185 kubelet[3472]: E1205 20:51:01.914489    3472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vmd6" podUID="d838e6e3-bd74-4653-9289-4f5375b03d4f"
	
	
	==> storage-provisioner [92c0f24978e39c68485fd113a97875a68cbb55f2f341d2ed2baf5b273a694d58] <==
	I1205 20:37:07.792160       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:37:07.803702       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:37:07.805281       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:37:07.818153       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:37:07.818380       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-816185_33b20623-4a0d-43c8-856d-b94d6915ca61!
	I1205 20:37:07.819504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3280737f-4498-47b2-a755-b949acc1ab4b", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-816185_33b20623-4a0d-43c8-856d-b94d6915ca61 became leader
	I1205 20:37:07.919372       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-816185_33b20623-4a0d-43c8-856d-b94d6915ca61!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-816185 -n no-preload-816185
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-816185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-8vmd6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-816185 describe pod metrics-server-6867b74b74-8vmd6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-816185 describe pod metrics-server-6867b74b74-8vmd6: exit status 1 (62.332001ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-8vmd6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-816185 describe pod metrics-server-6867b74b74-8vmd6: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (285.90s)
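
The kubelet and apiserver logs above are consistent with the metrics-server pod never starting (ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4), which in turn leaves the v1beta1.metrics.k8s.io APIService answering 503 and the addon check failing. A minimal manual spot-check along the same lines is sketched below; it assumes the no-preload-816185 context is still reachable and that the addon uses the usual metrics-server deployment name and k8s-app=metrics-server label (the label is an assumption, the deployment name is inferred from the metrics-server-6867b74b74 ReplicaSet above):

	# hypothetical manual spot-check; context, label, and deployment name as noted above
	kubectl --context no-preload-816185 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-816185 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context no-preload-816185 -n kube-system describe deployment metrics-server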

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (85.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.144:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.144:8443: connect: connection refused
[... the identical WARNING above was emitted 31 more times while the test kept polling the stopped apiserver at 192.168.72.144:8443, every attempt refused ...]
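Each warning above corresponds to one GET of /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard that was refused because the apiserver at 192.168.72.144:8443 is stopped. As a rough, editor-added sketch only (not the harness's own helper), the same query can be issued with client-go; the kubeconfig path is a placeholder assumption:

    // Sketch: list the kubernetes-dashboard pods by label selector, mirroring
    // the request in the warnings above. Placeholder kubeconfig path; the real
    // harness builds its client differently.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
    	if err != nil {
    		// With the apiserver stopped this fails with "connection refused",
    		// which is exactly what the warnings record.
    		fmt.Println("list failed:", err)
    		return
    	}
    	fmt.Println("dashboard pods found:", len(pods.Items))
    }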
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386085 -n old-k8s-version-386085
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 2 (239.504992ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-386085" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
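The 9m0s in the failure message is a context deadline wrapped around the poll; when it expires the wait is reported as "context deadline exceeded" even though each individual attempt failed with the refused connection. A minimal, editor-added sketch of that pattern (standard library only; the 15-second interval is an assumption, while the profile and selector are taken from the lines above):

    // Sketch of a deadline-bounded poll for the dashboard pod, assuming
    // kubectl is on PATH and the old-k8s-version-386085 context exists.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Overall deadline matching the 9m0s in the failure message above.
    	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
    	defer cancel()

    	ticker := time.NewTicker(15 * time.Second) // poll interval is an assumption
    	defer ticker.Stop()

    	for {
    		select {
    		case <-ctx.Done():
    			fmt.Println("gave up:", ctx.Err()) // "context deadline exceeded"
    			return
    		case <-ticker.C:
    			out, err := exec.CommandContext(ctx, "kubectl",
    				"--context", "old-k8s-version-386085",
    				"get", "pods", "-n", "kubernetes-dashboard",
    				"-l", "k8s-app=kubernetes-dashboard", "-o", "name").Output()
    			if err == nil && len(out) > 0 {
    				fmt.Print("dashboard pod present: ", string(out))
    				return
    			}
    		}
    	}
    }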
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-386085 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-386085 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.335µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-386085 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
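After the wait fails, the test falls back to the two diagnostics shown above, the minikube apiserver-status template and a kubectl describe of the metrics-scraper deployment, both of which also fail against the stopped cluster and leave the deployment info empty. A small editor-added sketch that reruns those exact commands (binary path and profile copied from the log; error handling simplified):

    // Sketch: rerun the two diagnostic commands recorded in the log above.
    // Assumes the CI binary path out/minikube-linux-amd64; adjust for a local run.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and prints its combined output and error.
    func run(name string, args ...string) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
    }

    func main() {
    	// Same status check the test issues (exit status 2 while stopped).
    	run("out/minikube-linux-amd64", "status", "--format={{.APIServer}}",
    		"-p", "old-k8s-version-386085", "-n", "old-k8s-version-386085")
    	// Same describe call; it needs a running apiserver to return anything.
    	run("kubectl", "--context", "old-k8s-version-386085",
    		"describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard")
    }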
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 2 (238.159368ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-386085 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-386085 logs -n 25: (1.643223784s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-790679 -- sudo                         | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-790679                                 | cert-options-790679          | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-886958                           | kubernetes-upgrade-886958    | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-816185             | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-789000            | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-315387                              | cert-expiration-315387       | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-242147 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | disable-driver-mounts-242147                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:25 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-386085        | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-942599  | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-816185                  | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-789000                 | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-816185                                   | no-preload-816185            | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:37 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-789000                                  | embed-certs-789000           | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:35 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-386085             | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC | 05 Dec 24 20:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-386085                              | old-k8s-version-386085       | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-942599       | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-942599 | jenkins | v1.34.0 | 05 Dec 24 20:28 UTC | 05 Dec 24 20:36 UTC |
	|         | default-k8s-diff-port-942599                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:28:03
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:28:03.038037  585929 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:28:03.038168  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038178  585929 out.go:358] Setting ErrFile to fd 2...
	I1205 20:28:03.038185  585929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:28:03.038375  585929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:28:03.038955  585929 out.go:352] Setting JSON to false
	I1205 20:28:03.039948  585929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":11429,"bootTime":1733419054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:28:03.040015  585929 start.go:139] virtualization: kvm guest
	I1205 20:28:03.042326  585929 out.go:177] * [default-k8s-diff-port-942599] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:28:03.044291  585929 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:28:03.044320  585929 notify.go:220] Checking for updates...
	I1205 20:28:03.047072  585929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:28:03.048480  585929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:28:03.049796  585929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:28:03.051035  585929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:28:03.052263  585929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:28:03.054167  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:28:03.054665  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.054749  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.070361  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33501
	I1205 20:28:03.070891  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.071534  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.071563  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.071995  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.072285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.072587  585929 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:28:03.072920  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:28:03.072968  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:28:03.088186  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I1205 20:28:03.088660  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:28:03.089202  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:28:03.089224  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:28:03.089542  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:28:03.089782  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:28:03.122562  585929 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:28:03.123970  585929 start.go:297] selected driver: kvm2
	I1205 20:28:03.123992  585929 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.124128  585929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:28:03.125014  585929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.125111  585929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:28:03.140461  585929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:28:03.140904  585929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:28:03.140943  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:28:03.141015  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:28:03.141067  585929 start.go:340] cluster config:
	{Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:28:03.141179  585929 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:28:03.144215  585929 out.go:177] * Starting "default-k8s-diff-port-942599" primary control-plane node in "default-k8s-diff-port-942599" cluster
	I1205 20:28:03.276565  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:03.145620  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:28:03.145661  585929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:28:03.145676  585929 cache.go:56] Caching tarball of preloaded images
	I1205 20:28:03.145844  585929 preload.go:172] Found /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:28:03.145864  585929 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:28:03.146005  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:28:03.146240  585929 start.go:360] acquireMachinesLock for default-k8s-diff-port-942599: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:28:06.348547  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:12.428620  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:15.500614  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:21.580587  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:24.652618  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:30.732598  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:33.804612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:39.884624  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:42.956577  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:49.036617  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:52.108607  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:28:58.188605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:01.260573  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:07.340591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:10.412578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:16.492574  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:19.564578  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:25.644591  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:28.716619  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:34.796609  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:37.868605  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:43.948594  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:47.020553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:53.100499  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:29:56.172560  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:02.252612  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:05.324648  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:11.404563  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:14.476553  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:20.556568  585025 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I1205 20:30:23.561620  585113 start.go:364] duration metric: took 4m32.790399884s to acquireMachinesLock for "embed-certs-789000"
	I1205 20:30:23.561696  585113 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:23.561711  585113 fix.go:54] fixHost starting: 
	I1205 20:30:23.562327  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:23.562400  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:23.578260  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I1205 20:30:23.578843  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:23.579379  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:30:23.579405  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:23.579776  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:23.580051  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:23.580222  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:30:23.582161  585113 fix.go:112] recreateIfNeeded on embed-certs-789000: state=Stopped err=<nil>
	I1205 20:30:23.582190  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	W1205 20:30:23.582386  585113 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:23.584585  585113 out.go:177] * Restarting existing kvm2 VM for "embed-certs-789000" ...
	I1205 20:30:23.586583  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Start
	I1205 20:30:23.586835  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring networks are active...
	I1205 20:30:23.587628  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network default is active
	I1205 20:30:23.587937  585113 main.go:141] libmachine: (embed-certs-789000) Ensuring network mk-embed-certs-789000 is active
	I1205 20:30:23.588228  585113 main.go:141] libmachine: (embed-certs-789000) Getting domain xml...
	I1205 20:30:23.588898  585113 main.go:141] libmachine: (embed-certs-789000) Creating domain...
	I1205 20:30:24.829936  585113 main.go:141] libmachine: (embed-certs-789000) Waiting to get IP...
	I1205 20:30:24.830897  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:24.831398  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:24.831465  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:24.831364  586433 retry.go:31] will retry after 208.795355ms: waiting for machine to come up
	I1205 20:30:25.042078  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.042657  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.042689  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.042599  586433 retry.go:31] will retry after 385.313968ms: waiting for machine to come up
	I1205 20:30:25.429439  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.429877  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.429913  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.429811  586433 retry.go:31] will retry after 432.591358ms: waiting for machine to come up
	I1205 20:30:23.558453  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:30:23.558508  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.558905  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:30:23.558943  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:30:23.559166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:30:23.561471  585025 machine.go:96] duration metric: took 4m37.380964872s to provisionDockerMachine
	I1205 20:30:23.561518  585025 fix.go:56] duration metric: took 4m37.403172024s for fixHost
	I1205 20:30:23.561524  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 4m37.40319095s
	W1205 20:30:23.561546  585025 start.go:714] error starting host: provision: host is not running
	W1205 20:30:23.561677  585025 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:30:23.561688  585025 start.go:729] Will try again in 5 seconds ...
	I1205 20:30:25.864656  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:25.865217  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:25.865255  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:25.865138  586433 retry.go:31] will retry after 571.148349ms: waiting for machine to come up
	I1205 20:30:26.437644  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:26.438220  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:26.438250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:26.438165  586433 retry.go:31] will retry after 585.234455ms: waiting for machine to come up
	I1205 20:30:27.025107  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.025510  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.025538  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.025459  586433 retry.go:31] will retry after 648.291531ms: waiting for machine to come up
	I1205 20:30:27.675457  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:27.675898  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:27.675928  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:27.675838  586433 retry.go:31] will retry after 804.071148ms: waiting for machine to come up
	I1205 20:30:28.481966  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:28.482386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:28.482416  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:28.482329  586433 retry.go:31] will retry after 905.207403ms: waiting for machine to come up
	I1205 20:30:29.388933  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:29.389546  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:29.389571  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:29.389484  586433 retry.go:31] will retry after 1.48894232s: waiting for machine to come up
	I1205 20:30:28.562678  585025 start.go:360] acquireMachinesLock for no-preload-816185: {Name:mk6b8bd9f5e6574a7838ad9c269b1c99e1910a23 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:30:30.880218  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:30.880742  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:30.880773  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:30.880685  586433 retry.go:31] will retry after 2.314200549s: waiting for machine to come up
	I1205 20:30:33.198477  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:33.198998  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:33.199029  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:33.198945  586433 retry.go:31] will retry after 1.922541264s: waiting for machine to come up
	I1205 20:30:35.123922  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:35.124579  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:35.124607  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:35.124524  586433 retry.go:31] will retry after 3.537087912s: waiting for machine to come up
	I1205 20:30:38.662839  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:38.663212  585113 main.go:141] libmachine: (embed-certs-789000) DBG | unable to find current IP address of domain embed-certs-789000 in network mk-embed-certs-789000
	I1205 20:30:38.663250  585113 main.go:141] libmachine: (embed-certs-789000) DBG | I1205 20:30:38.663160  586433 retry.go:31] will retry after 3.371938424s: waiting for machine to come up
	I1205 20:30:43.457332  585602 start.go:364] duration metric: took 3m31.488905557s to acquireMachinesLock for "old-k8s-version-386085"
	I1205 20:30:43.457418  585602 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:30:43.457427  585602 fix.go:54] fixHost starting: 
	I1205 20:30:43.457835  585602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:30:43.457891  585602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:30:43.474845  585602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I1205 20:30:43.475386  585602 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:30:43.475993  585602 main.go:141] libmachine: Using API Version  1
	I1205 20:30:43.476026  585602 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:30:43.476404  585602 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:30:43.476613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:30:43.476778  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetState
	I1205 20:30:43.478300  585602 fix.go:112] recreateIfNeeded on old-k8s-version-386085: state=Stopped err=<nil>
	I1205 20:30:43.478329  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	W1205 20:30:43.478502  585602 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:30:43.480644  585602 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-386085" ...
	I1205 20:30:42.038738  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039204  585113 main.go:141] libmachine: (embed-certs-789000) Found IP for machine: 192.168.39.200
	I1205 20:30:42.039235  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has current primary IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.039244  585113 main.go:141] libmachine: (embed-certs-789000) Reserving static IP address...
	I1205 20:30:42.039760  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.039806  585113 main.go:141] libmachine: (embed-certs-789000) DBG | skip adding static IP to network mk-embed-certs-789000 - found existing host DHCP lease matching {name: "embed-certs-789000", mac: "52:54:00:48:ae:b2", ip: "192.168.39.200"}
	I1205 20:30:42.039819  585113 main.go:141] libmachine: (embed-certs-789000) Reserved static IP address: 192.168.39.200
	I1205 20:30:42.039835  585113 main.go:141] libmachine: (embed-certs-789000) Waiting for SSH to be available...
	I1205 20:30:42.039843  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Getting to WaitForSSH function...
	I1205 20:30:42.042013  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042352  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.042386  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.042542  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH client type: external
	I1205 20:30:42.042562  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa (-rw-------)
	I1205 20:30:42.042586  585113 main.go:141] libmachine: (embed-certs-789000) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:30:42.042595  585113 main.go:141] libmachine: (embed-certs-789000) DBG | About to run SSH command:
	I1205 20:30:42.042603  585113 main.go:141] libmachine: (embed-certs-789000) DBG | exit 0
	I1205 20:30:42.168573  585113 main.go:141] libmachine: (embed-certs-789000) DBG | SSH cmd err, output: <nil>: 
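The WaitForSSH step above simply re-runs an external "ssh ... exit 0" probe until the guest answers. A rough stand-alone equivalent, using the key options, key path and address taken from the logged command line (nothing new is introduced here):

    # Minimal reachability probe, assembled from the options shown in the log above.
    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o PasswordAuthentication=no -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa \
      -p 22 docker@192.168.39.200 'exit 0' && echo "guest SSH is reachable"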
	I1205 20:30:42.168960  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetConfigRaw
	I1205 20:30:42.169783  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.172396  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.172790  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.172818  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.173023  585113 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/config.json ...
	I1205 20:30:42.173214  585113 machine.go:93] provisionDockerMachine start ...
	I1205 20:30:42.173234  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:42.173465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.175399  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175754  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.175785  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.175885  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.176063  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176208  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.176412  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.176583  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.176816  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.176830  585113 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:30:42.280829  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:30:42.280861  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281135  585113 buildroot.go:166] provisioning hostname "embed-certs-789000"
	I1205 20:30:42.281168  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.281409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.284355  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284692  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.284723  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.284817  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.285019  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285185  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.285338  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.285511  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.285716  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.285730  585113 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-789000 && echo "embed-certs-789000" | sudo tee /etc/hostname
	I1205 20:30:42.409310  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-789000
	
	I1205 20:30:42.409370  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.412182  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412524  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.412566  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.412779  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.412989  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413137  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.413278  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.413468  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.413674  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.413690  585113 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-789000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-789000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-789000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:30:42.529773  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
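Once the hostname script above has run, the result can be spot-checked on the guest. This is a hypothetical manual check for debugging, not something the test itself runs:

    hostnamectl --static              # expected: embed-certs-789000
    grep -n '127.0.1.1' /etc/hosts    # expected to carry the embed-certs-789000 alias set above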
	I1205 20:30:42.529806  585113 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:30:42.529829  585113 buildroot.go:174] setting up certificates
	I1205 20:30:42.529841  585113 provision.go:84] configureAuth start
	I1205 20:30:42.529850  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetMachineName
	I1205 20:30:42.530201  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:42.533115  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533527  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.533558  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.533753  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.535921  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536310  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.536339  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.536518  585113 provision.go:143] copyHostCerts
	I1205 20:30:42.536610  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:30:42.536631  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:30:42.536698  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:30:42.536793  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:30:42.536802  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:30:42.536826  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:30:42.536880  585113 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:30:42.536887  585113 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:30:42.536908  585113 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:30:42.536956  585113 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.embed-certs-789000 san=[127.0.0.1 192.168.39.200 embed-certs-789000 localhost minikube]
	I1205 20:30:42.832543  585113 provision.go:177] copyRemoteCerts
	I1205 20:30:42.832610  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:30:42.832640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.835403  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835669  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.835701  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.835848  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.836027  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.836161  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.836314  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:42.918661  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:30:42.943903  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 20:30:42.968233  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:30:42.993174  585113 provision.go:87] duration metric: took 463.317149ms to configureAuth
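configureAuth regenerates the machine server certificate with the SANs listed above (127.0.0.1, 192.168.39.200, embed-certs-789000, localhost, minikube) and copies it to /etc/docker on the guest. One way to confirm the deployed certificate by hand on the guest, as a sketch rather than part of the test:

    # Inspect the SANs and verify the chain of the certs just copied to /etc/docker.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'           # should list the SANs logged above
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem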
	I1205 20:30:42.993249  585113 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:30:42.993449  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:30:42.993554  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:42.996211  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996637  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:42.996696  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:42.996841  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:42.997049  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997196  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:42.997305  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:42.997458  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:42.997641  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:42.997656  585113 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:30:43.220096  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:30:43.220127  585113 machine.go:96] duration metric: took 1.046899757s to provisionDockerMachine
	I1205 20:30:43.220141  585113 start.go:293] postStartSetup for "embed-certs-789000" (driver="kvm2")
	I1205 20:30:43.220152  585113 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:30:43.220176  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.220544  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:30:43.220584  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.223481  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.223860  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.223889  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.224102  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.224316  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.224483  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.224667  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.307878  585113 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:30:43.312875  585113 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:30:43.312905  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:30:43.312981  585113 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:30:43.313058  585113 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:30:43.313169  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:30:43.323221  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:43.347978  585113 start.go:296] duration metric: took 127.819083ms for postStartSetup
	I1205 20:30:43.348023  585113 fix.go:56] duration metric: took 19.786318897s for fixHost
	I1205 20:30:43.348046  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.350639  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351004  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.351026  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.351247  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.351478  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351642  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.351803  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.351950  585113 main.go:141] libmachine: Using SSH client type: native
	I1205 20:30:43.352122  585113 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I1205 20:30:43.352133  585113 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:30:43.457130  585113 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430643.415370749
	
	I1205 20:30:43.457164  585113 fix.go:216] guest clock: 1733430643.415370749
	I1205 20:30:43.457176  585113 fix.go:229] Guest: 2024-12-05 20:30:43.415370749 +0000 UTC Remote: 2024-12-05 20:30:43.34802793 +0000 UTC m=+292.733798952 (delta=67.342819ms)
	I1205 20:30:43.457209  585113 fix.go:200] guest clock delta is within tolerance: 67.342819ms
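The guest-clock check compares the timestamp returned by the logged "date +%s.%N" against the host clock and proceeds only if the skew is small (about 67ms here). A minimal sketch of the same comparison, reusing the key path and address from earlier in the log:

    GUEST_TS=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa \
      docker@192.168.39.200 'date +%s.%N')
    HOST_TS=$(date +%s.%N)
    # Print the signed delta in seconds; the run above measured roughly +0.067s.
    awk -v g="$GUEST_TS" -v h="$HOST_TS" 'BEGIN { printf "guest-host clock delta: %+.3fs\n", g - h }'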
	I1205 20:30:43.457217  585113 start.go:83] releasing machines lock for "embed-certs-789000", held for 19.895543311s
	I1205 20:30:43.457251  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.457563  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:43.460628  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461002  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.461042  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.461175  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461758  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.461937  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:30:43.462067  585113 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:30:43.462120  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.462147  585113 ssh_runner.go:195] Run: cat /version.json
	I1205 20:30:43.462169  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:30:43.464859  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465147  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465237  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465264  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465409  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465472  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:43.465497  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:43.465589  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465711  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:30:43.465768  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.465863  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:30:43.465907  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.466006  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:30:43.466129  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:30:43.568909  585113 ssh_runner.go:195] Run: systemctl --version
	I1205 20:30:43.575175  585113 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:30:43.725214  585113 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:30:43.732226  585113 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:30:43.732369  585113 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:30:43.750186  585113 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:30:43.750223  585113 start.go:495] detecting cgroup driver to use...
	I1205 20:30:43.750296  585113 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:30:43.767876  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:30:43.783386  585113 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:30:43.783465  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:30:43.799917  585113 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:30:43.815607  585113 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:30:43.935150  585113 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:30:44.094292  585113 docker.go:233] disabling docker service ...
	I1205 20:30:44.094378  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:30:44.111307  585113 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:30:44.127528  585113 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:30:44.284496  585113 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:30:44.422961  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:30:44.439104  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:30:44.461721  585113 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:30:44.461787  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.476398  585113 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:30:44.476463  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.489821  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.502250  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.514245  585113 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:30:44.528227  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.540205  585113 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.559447  585113 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:30:44.571434  585113 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:30:44.583635  585113 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:30:44.583717  585113 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:30:44.600954  585113 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:30:44.613381  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:44.733592  585113 ssh_runner.go:195] Run: sudo systemctl restart crio
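Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl that the rest of the run relies on. The fragment below is reconstructed from the logged commands (not read back from the VM), together with one way to confirm the live values:

    # Expected net effect of the edits above (reconstructed from the logged sed commands):
    #   pause_image     = "registry.k8s.io/pause:3.10"
    #   cgroup_manager  = "cgroupfs"
    #   conmon_cgroup   = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0" ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf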
	I1205 20:30:44.843948  585113 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:30:44.844036  585113 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:30:44.849215  585113 start.go:563] Will wait 60s for crictl version
	I1205 20:30:44.849275  585113 ssh_runner.go:195] Run: which crictl
	I1205 20:30:44.853481  585113 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:30:44.900488  585113 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:30:44.900583  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.944771  585113 ssh_runner.go:195] Run: crio --version
	I1205 20:30:44.977119  585113 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:30:44.978527  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetIP
	I1205 20:30:44.981609  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982001  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:30:44.982037  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:30:44.982240  585113 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:30:44.986979  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:30:45.001779  585113 kubeadm.go:883] updating cluster {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1205 20:30:45.001935  585113 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:30:45.002021  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:45.041827  585113 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:30:45.041918  585113 ssh_runner.go:195] Run: which lz4
	I1205 20:30:45.046336  585113 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:30:45.050804  585113 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:30:45.050852  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:30:43.482307  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .Start
	I1205 20:30:43.482501  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring networks are active...
	I1205 20:30:43.483222  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network default is active
	I1205 20:30:43.483574  585602 main.go:141] libmachine: (old-k8s-version-386085) Ensuring network mk-old-k8s-version-386085 is active
	I1205 20:30:43.484156  585602 main.go:141] libmachine: (old-k8s-version-386085) Getting domain xml...
	I1205 20:30:43.485045  585602 main.go:141] libmachine: (old-k8s-version-386085) Creating domain...
	I1205 20:30:44.770817  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting to get IP...
	I1205 20:30:44.772079  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:44.772538  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:44.772599  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:44.772517  586577 retry.go:31] will retry after 247.056435ms: waiting for machine to come up
	I1205 20:30:45.021096  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.021642  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.021560  586577 retry.go:31] will retry after 241.543543ms: waiting for machine to come up
	I1205 20:30:45.265136  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.265654  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.265683  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.265596  586577 retry.go:31] will retry after 324.624293ms: waiting for machine to come up
	I1205 20:30:45.592067  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:45.592603  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:45.592636  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:45.592558  586577 retry.go:31] will retry after 408.275958ms: waiting for machine to come up
	I1205 20:30:46.002321  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.002872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.002904  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.002808  586577 retry.go:31] will retry after 693.356488ms: waiting for machine to come up
	I1205 20:30:46.697505  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:46.697874  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:46.697900  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:46.697846  586577 retry.go:31] will retry after 906.807324ms: waiting for machine to come up
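The retry loop above is waiting for libvirt's DHCP server to hand the restarted old-k8s-version-386085 domain an address. Assuming access to the same libvirt instance (qemu:///system, as logged in the cluster config), the lease can also be watched by hand with standard virsh commands; this is a manual aid, not part of the test:

    # Watch for the DHCP lease of the MAC logged above (52:54:00:6a:06:a4).
    virsh -c qemu:///system net-dhcp-leases mk-old-k8s-version-386085
    virsh -c qemu:///system domifaddr old-k8s-version-386085 --source lease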
	I1205 20:30:46.612504  585113 crio.go:462] duration metric: took 1.56620974s to copy over tarball
	I1205 20:30:46.612585  585113 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:30:48.868826  585113 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256202653s)
	I1205 20:30:48.868863  585113 crio.go:469] duration metric: took 2.256329112s to extract the tarball
	I1205 20:30:48.868873  585113 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:30:48.906872  585113 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:30:48.955442  585113 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:30:48.955468  585113 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:30:48.955477  585113 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.31.2 crio true true} ...
	I1205 20:30:48.955603  585113 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-789000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:30:48.955668  585113 ssh_runner.go:195] Run: crio config
	I1205 20:30:49.007389  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:49.007419  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:49.007433  585113 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:30:49.007473  585113 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-789000 NodeName:embed-certs-789000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:30:49.007656  585113 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-789000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:30:49.007734  585113 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:30:49.021862  585113 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:30:49.021949  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:30:49.032937  585113 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1205 20:30:49.053311  585113 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:30:49.073636  585113 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
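The kubeadm config generated above has just been written to /var/tmp/minikube/kubeadm.yaml.new on the guest. If it ever needs to be sanity-checked outside the test, recent kubeadm releases ship a "config validate" subcommand; this is a hypothetical manual step using the binary path shown in the log:

    # Parse and validate the generated config with the same kubeadm version minikube installed.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new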
	I1205 20:30:49.094437  585113 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I1205 20:30:49.098470  585113 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:30:49.112013  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:30:49.246312  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
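At this point the kubelet drop-in, the systemd unit and the control-plane host alias are all in place and kubelet has been started. Two quick checks on the guest that correspond to the files written above, useful when debugging a failed run:

    grep control-plane.minikube.internal /etc/hosts   # expected: 192.168.39.200 control-plane.minikube.internal
    systemctl cat kubelet | head -n 20                # shows the unit plus the 10-kubeadm.conf drop-in written above
    systemctl is-active kubelet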
	I1205 20:30:49.264250  585113 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000 for IP: 192.168.39.200
	I1205 20:30:49.264301  585113 certs.go:194] generating shared ca certs ...
	I1205 20:30:49.264329  585113 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:30:49.264565  585113 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:30:49.264627  585113 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:30:49.264641  585113 certs.go:256] generating profile certs ...
	I1205 20:30:49.264775  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/client.key
	I1205 20:30:49.264854  585113 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key.5c723d79
	I1205 20:30:49.264894  585113 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key
	I1205 20:30:49.265026  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:30:49.265094  585113 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:30:49.265109  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:30:49.265144  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:30:49.265179  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:30:49.265215  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:30:49.265258  585113 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:30:49.266137  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:30:49.297886  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:30:49.339461  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:30:49.385855  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:30:49.427676  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 20:30:49.466359  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:30:49.492535  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:30:49.518311  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/embed-certs-789000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:30:49.543545  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:30:49.567956  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:30:49.592361  585113 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:30:49.616245  585113 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:30:49.633947  585113 ssh_runner.go:195] Run: openssl version
	I1205 20:30:49.640353  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:30:49.652467  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657353  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.657440  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:30:49.664045  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:30:49.679941  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:30:49.695153  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700397  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.700458  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:30:49.706786  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:30:49.718994  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:30:49.731470  585113 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736654  585113 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.736725  585113 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:30:49.743034  585113 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:30:49.755334  585113 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:30:49.760378  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:30:49.766942  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:30:49.773911  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:30:49.780556  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:30:49.787004  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:30:49.793473  585113 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
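The openssl calls above do two different jobs: "-hash -noout" produces the subject-hash names used for the /etc/ssl/certs symlinks (b5213941.0, 51391683.0, 3ec20f2e.0), and "-checkend 86400" asks whether each apiserver, etcd and front-proxy certificate is still valid for at least another day. A condensed version of a subset of the same checks, run on the guest with paths taken from the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, the symlink name above
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
        && echo "$c.crt valid for >24h"
    done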
	I1205 20:30:49.800009  585113 kubeadm.go:392] StartCluster: {Name:embed-certs-789000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:30:49.800118  585113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:30:49.800163  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.844520  585113 cri.go:89] found id: ""
	I1205 20:30:49.844620  585113 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:30:49.857604  585113 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:30:49.857640  585113 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:30:49.857702  585113 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:30:49.870235  585113 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:30:49.871318  585113 kubeconfig.go:125] found "embed-certs-789000" server: "https://192.168.39.200:8443"
	I1205 20:30:49.873416  585113 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:30:49.884281  585113 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.200
	I1205 20:30:49.884331  585113 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:30:49.884348  585113 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:30:49.884410  585113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:30:49.930238  585113 cri.go:89] found id: ""
	I1205 20:30:49.930351  585113 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:30:49.947762  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:30:49.957878  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:30:49.957902  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:30:49.957960  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:30:49.967261  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:30:49.967342  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:30:49.977868  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:30:49.987715  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:30:49.987777  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:30:49.998157  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.008224  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:30:50.008334  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:30:50.018748  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:30:50.028204  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:30:50.028287  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:30:50.038459  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:30:50.049458  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:50.175199  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:47.606601  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:47.607065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:47.607098  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:47.607001  586577 retry.go:31] will retry after 1.007867893s: waiting for machine to come up
	I1205 20:30:48.617140  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:48.617641  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:48.617674  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:48.617608  586577 retry.go:31] will retry after 1.15317606s: waiting for machine to come up
	I1205 20:30:49.773126  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:49.773670  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:49.773699  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:49.773620  586577 retry.go:31] will retry after 1.342422822s: waiting for machine to come up
	I1205 20:30:51.117592  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:51.118034  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:51.118065  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:51.117973  586577 retry.go:31] will retry after 1.575794078s: waiting for machine to come up
	I1205 20:30:51.203131  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.027881984s)
	I1205 20:30:51.203193  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.415679  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:51.500984  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
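The restart path above does not run a full `kubeadm init`; it re-runs the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local, and later addon) against the generated /var/tmp/minikube/kubeadm.yaml. A rough local sketch of that loop, assuming a kubeadm binary on the host (the log runs these on the VM over SSH, not locally):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runPhase invokes a single `kubeadm init phase ...` step with the bundled
// kubeadm binaries directory first on PATH, loosely mirroring the commands
// in the log above.
func runPhase(phase ...string) error {
	args := append([]string{"init", "phase"}, phase...)
	args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd := exec.Command("kubeadm", args...)
	cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.31.2:"+os.Getenv("PATH"))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// The restart path re-runs these phases in order instead of a full `kubeadm init`.
	for _, phase := range [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	} {
		if err := runPhase(phase...); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
	}
}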
	I1205 20:30:51.598883  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:30:51.598986  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.099206  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.599755  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:30:52.619189  585113 api_server.go:72] duration metric: took 1.020303049s to wait for apiserver process to appear ...
	I1205 20:30:52.619236  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:30:52.619268  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:52.619903  585113 api_server.go:269] stopped: https://192.168.39.200:8443/healthz: Get "https://192.168.39.200:8443/healthz": dial tcp 192.168.39.200:8443: connect: connection refused
	I1205 20:30:53.119501  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.342363  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:30:55.342398  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:30:55.342418  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.471683  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.471729  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:55.619946  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:55.634855  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:55.634906  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.119928  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.128358  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:30:56.128396  585113 api_server.go:103] status: https://192.168.39.200:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:30:56.620047  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:30:56.625869  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:30:56.633658  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:30:56.633698  585113 api_server.go:131] duration metric: took 4.014451973s to wait for apiserver health ...
	I1205 20:30:56.633712  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:30:56.633721  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:30:56.635658  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
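The healthz wait above polls https://192.168.39.200:8443/healthz roughly every 500ms until it returns 200; the 403 (RBAC not yet bootstrapped, anonymous user) and 500 (post-start hooks still failing) responses in between are expected and only logged as warnings. A simplified sketch of such a readiness poll, with an assumed timeout rather than minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the deadline passes.
// Certificate verification is skipped because the probe is anonymous, as in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			// 403/500 are expected while bootstrap hooks finish; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.200:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}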
	I1205 20:30:52.695389  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:52.695838  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:52.695868  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:52.695784  586577 retry.go:31] will retry after 2.377931285s: waiting for machine to come up
	I1205 20:30:55.076859  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:55.077428  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:55.077469  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:55.077377  586577 retry.go:31] will retry after 2.586837249s: waiting for machine to come up
	I1205 20:30:56.637276  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:30:56.649131  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:30:56.670981  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:30:56.682424  585113 system_pods.go:59] 8 kube-system pods found
	I1205 20:30:56.682497  585113 system_pods.go:61] "coredns-7c65d6cfc9-hrrjc" [43d8b550-f29d-4a84-a2fc-b456abc486c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:30:56.682508  585113 system_pods.go:61] "etcd-embed-certs-789000" [99f232e4-1bc8-4f98-8bcf-8aa61d66158b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:30:56.682519  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [d1d11749-0ddc-4172-aaa9-bca00c64c912] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:30:56.682528  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [b291c993-cd10-4d0f-8c3e-a6db726cf83a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:30:56.682536  585113 system_pods.go:61] "kube-proxy-h79dj" [80abe907-24e7-4001-90a6-f4d10fd9fc6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:30:56.682544  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [490d7afa-24fd-43c8-8088-539bb7e1eb9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:30:56.682556  585113 system_pods.go:61] "metrics-server-6867b74b74-tlsjl" [cd1d73a4-27d1-4e68-b7d8-6da497fc4e53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:30:56.682570  585113 system_pods.go:61] "storage-provisioner" [3246e383-4f15-4222-a50c-c5b243fda12a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:30:56.682579  585113 system_pods.go:74] duration metric: took 11.566899ms to wait for pod list to return data ...
	I1205 20:30:56.682598  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:30:56.687073  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:30:56.687172  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:30:56.687222  585113 node_conditions.go:105] duration metric: took 4.613225ms to run NodePressure ...
	I1205 20:30:56.687273  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:30:56.981686  585113 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985944  585113 kubeadm.go:739] kubelet initialised
	I1205 20:30:56.985968  585113 kubeadm.go:740] duration metric: took 4.256434ms waiting for restarted kubelet to initialise ...
	I1205 20:30:56.985976  585113 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:30:56.991854  585113 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:30:58.997499  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
	I1205 20:30:57.667200  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:30:57.667644  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:30:57.667681  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:30:57.667592  586577 retry.go:31] will retry after 2.856276116s: waiting for machine to come up
	I1205 20:31:00.525334  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:00.525796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | unable to find current IP address of domain old-k8s-version-386085 in network mk-old-k8s-version-386085
	I1205 20:31:00.525830  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | I1205 20:31:00.525740  586577 retry.go:31] will retry after 5.119761936s: waiting for machine to come up
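The interleaved retry.go lines show libmachine waiting for the old-k8s-version-386085 VM to obtain a DHCP lease, retrying with growing, jittered delays (1.0s, 1.15s, 1.34s, ... 5.1s). A simplified sketch of that pattern; the lookup function, attempt count, and growth factor are placeholders, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls lookup until it succeeds or attempts are exhausted,
// sleeping for a jittered, growing delay between tries, similar to the
// "will retry after ..." messages in the log above.
func retryWithBackoff(lookup func() (string, error), attempts int, base time.Duration) (string, error) {
	delay := base
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return "", errors.New("machine did not come up in time")
}

func main() {
	// Placeholder lookup that never finds an IP; minikube instead queries the libvirt DHCP leases.
	_, err := retryWithBackoff(func() (string, error) { return "", errors.New("no lease yet") }, 3, time.Second)
	fmt.Println(err)
}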
	I1205 20:31:00.999102  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:01.500344  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:01.500371  585113 pod_ready.go:82] duration metric: took 4.508490852s for pod "coredns-7c65d6cfc9-hrrjc" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:01.500382  585113 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:03.506621  585113 pod_ready.go:103] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:05.007677  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:05.007703  585113 pod_ready.go:82] duration metric: took 3.507315826s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:05.007713  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:05.646790  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647230  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has current primary IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.647264  585602 main.go:141] libmachine: (old-k8s-version-386085) Found IP for machine: 192.168.72.144
	I1205 20:31:05.647278  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserving static IP address...
	I1205 20:31:05.647796  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.647834  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | skip adding static IP to network mk-old-k8s-version-386085 - found existing host DHCP lease matching {name: "old-k8s-version-386085", mac: "52:54:00:6a:06:a4", ip: "192.168.72.144"}
	I1205 20:31:05.647856  585602 main.go:141] libmachine: (old-k8s-version-386085) Reserved static IP address: 192.168.72.144
	I1205 20:31:05.647872  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Getting to WaitForSSH function...
	I1205 20:31:05.647889  585602 main.go:141] libmachine: (old-k8s-version-386085) Waiting for SSH to be available...
	I1205 20:31:05.650296  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650610  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.650643  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.650742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH client type: external
	I1205 20:31:05.650779  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa (-rw-------)
	I1205 20:31:05.650816  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:05.650837  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | About to run SSH command:
	I1205 20:31:05.650851  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | exit 0
	I1205 20:31:05.776876  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:05.777311  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetConfigRaw
	I1205 20:31:05.777948  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:05.780609  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781053  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.781091  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.781319  585602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/config.json ...
	I1205 20:31:05.781585  585602 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:05.781607  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:05.781942  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.784729  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785155  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.785191  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.785326  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.785491  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785659  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.785886  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.786078  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.786309  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.786323  585602 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:05.893034  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:05.893079  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893388  585602 buildroot.go:166] provisioning hostname "old-k8s-version-386085"
	I1205 20:31:05.893426  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:05.893623  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:05.896484  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.896883  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:05.896910  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:05.897031  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:05.897252  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897441  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:05.897615  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:05.897796  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:05.897965  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:05.897977  585602 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-386085 && echo "old-k8s-version-386085" | sudo tee /etc/hostname
	I1205 20:31:06.017910  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-386085
	
	I1205 20:31:06.017939  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.020956  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021298  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.021332  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.021494  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.021678  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021863  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.021995  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.022137  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.022325  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.022342  585602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-386085' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-386085/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-386085' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:06.138200  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:06.138234  585602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:06.138261  585602 buildroot.go:174] setting up certificates
	I1205 20:31:06.138274  585602 provision.go:84] configureAuth start
	I1205 20:31:06.138287  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetMachineName
	I1205 20:31:06.138588  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.141488  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.141909  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.141965  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.142096  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.144144  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144720  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.144742  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.144951  585602 provision.go:143] copyHostCerts
	I1205 20:31:06.145020  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:06.145031  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:06.145085  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:06.145206  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:06.145219  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:06.145248  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:06.145335  585602 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:06.145346  585602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:06.145376  585602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:06.145452  585602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-386085 san=[127.0.0.1 192.168.72.144 localhost minikube old-k8s-version-386085]
	I1205 20:31:06.276466  585602 provision.go:177] copyRemoteCerts
	I1205 20:31:06.276530  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:06.276559  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.279218  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279550  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.279578  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.279766  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.279990  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.280152  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.280317  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.362479  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:06.387631  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:31:06.413110  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:06.437931  585602 provision.go:87] duration metric: took 299.641033ms to configureAuth
	I1205 20:31:06.437962  585602 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:06.438176  585602 config.go:182] Loaded profile config "old-k8s-version-386085": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:31:06.438272  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.441059  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441413  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.441444  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.441655  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.441846  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.441992  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.442174  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.442379  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.442552  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.442568  585602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:06.655666  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:06.655699  585602 machine.go:96] duration metric: took 874.099032ms to provisionDockerMachine
	I1205 20:31:06.655713  585602 start.go:293] postStartSetup for "old-k8s-version-386085" (driver="kvm2")
	I1205 20:31:06.655723  585602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:06.655752  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.656082  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:06.656115  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.658835  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659178  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.659229  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.659378  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.659636  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.659808  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.659971  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.744484  585602 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:06.749025  585602 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:06.749060  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:06.749134  585602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:06.749273  585602 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:06.749411  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:06.760720  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:06.785449  585602 start.go:296] duration metric: took 129.720092ms for postStartSetup
	I1205 20:31:06.785500  585602 fix.go:56] duration metric: took 23.328073686s for fixHost
	I1205 20:31:06.785526  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.788417  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.788797  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.788828  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.789049  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.789296  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789483  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.789688  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.789870  585602 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:06.790046  585602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I1205 20:31:06.790065  585602 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:06.897781  585929 start.go:364] duration metric: took 3m3.751494327s to acquireMachinesLock for "default-k8s-diff-port-942599"
	I1205 20:31:06.897847  585929 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:06.897858  585929 fix.go:54] fixHost starting: 
	I1205 20:31:06.898355  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:06.898419  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:06.916556  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40927
	I1205 20:31:06.917111  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:06.917648  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:31:06.917674  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:06.918014  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:06.918256  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:06.918402  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:31:06.920077  585929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-942599: state=Stopped err=<nil>
	I1205 20:31:06.920105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	W1205 20:31:06.920257  585929 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:06.922145  585929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-942599" ...
	I1205 20:31:06.923548  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Start
	I1205 20:31:06.923770  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring networks are active...
	I1205 20:31:06.924750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network default is active
	I1205 20:31:06.925240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Ensuring network mk-default-k8s-diff-port-942599 is active
	I1205 20:31:06.925721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Getting domain xml...
	I1205 20:31:06.926719  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Creating domain...
	I1205 20:31:06.897579  585602 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430666.872047181
	
	I1205 20:31:06.897606  585602 fix.go:216] guest clock: 1733430666.872047181
	I1205 20:31:06.897615  585602 fix.go:229] Guest: 2024-12-05 20:31:06.872047181 +0000 UTC Remote: 2024-12-05 20:31:06.785506394 +0000 UTC m=+234.970971247 (delta=86.540787ms)
	I1205 20:31:06.897679  585602 fix.go:200] guest clock delta is within tolerance: 86.540787ms
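For context on the fix.go lines above: the guest clock is read by running date +%s.%N on the VM over SSH (the command shown a few lines earlier) and compared with the host-side timestamp taken when the command returns; here the skew is about 86.5 ms, inside tolerance, so no resync is needed. A rough manual equivalent, reusing the SSH key, user and address that appear later in this log (illustrative sketch only, not the code path minikube takes):

    guest=$(ssh -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa \
        docker@192.168.72.144 'date +%s.%N')                                     # guest clock, seconds.nanoseconds
    host=$(date +%s.%N)                                                          # host clock, sampled just after the SSH call returns
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %+.3fs\n", h - g }'   # the skew that is compared against the tolerance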
	I1205 20:31:06.897691  585602 start.go:83] releasing machines lock for "old-k8s-version-386085", held for 23.440303187s
	I1205 20:31:06.897727  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.898085  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:06.901127  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901530  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.901567  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.901719  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902413  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902626  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .DriverName
	I1205 20:31:06.902776  585602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:06.902827  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.902878  585602 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:06.902903  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHHostname
	I1205 20:31:06.905664  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.905912  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906050  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906086  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906256  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906341  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:06.906367  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:06.906411  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906517  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHPort
	I1205 20:31:06.906613  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906684  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHKeyPath
	I1205 20:31:06.906837  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetSSHUsername
	I1205 20:31:06.906849  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.907112  585602 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/old-k8s-version-386085/id_rsa Username:docker}
	I1205 20:31:06.986078  585602 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:07.009500  585602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:07.159146  585602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:07.166263  585602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:07.166358  585602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:07.186021  585602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:07.186063  585602 start.go:495] detecting cgroup driver to use...
	I1205 20:31:07.186140  585602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:07.205074  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:07.221207  585602 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:07.221268  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:07.236669  585602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:07.252848  585602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:07.369389  585602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:07.504993  585602 docker.go:233] disabling docker service ...
	I1205 20:31:07.505101  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:07.523294  585602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:07.538595  585602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:07.687830  585602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:07.816176  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:07.833624  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:07.853409  585602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 20:31:07.853478  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.865346  585602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:07.865426  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.877962  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:07.889255  585602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
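The commands above configure the CRI side of the guest: crictl is pointed at CRI-O's socket, and the CRI-O drop-in gets the pinned pause image and the cgroupfs cgroup manager. Reconstructed from the tee and sed commands themselves, the resulting files contain (all other settings omitted):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf, relevant lines only
    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"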
	I1205 20:31:07.901632  585602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:07.916169  585602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:07.927092  585602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:07.927169  585602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:07.942288  585602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
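The netfilter warning above is expected on a freshly booted guest: the br_netfilter module is not loaded yet, so the sysctl key does not exist; loading the module and enabling IP forwarding covers both. A condensed sketch of the same sequence, run as root on the guest:

    sysctl net.bridge.bridge-nf-call-iptables || modprobe br_netfilter   # load the module if the key is missing
    echo 1 > /proc/sys/net/ipv4/ip_forward                               # enable forwarding, needed for pod traffic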
	I1205 20:31:07.953314  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:08.092156  585602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:08.205715  585602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:08.205799  585602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:08.214280  585602 start.go:563] Will wait 60s for crictl version
	I1205 20:31:08.214351  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:08.220837  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:08.265983  585602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:08.266065  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.295839  585602 ssh_runner.go:195] Run: crio --version
	I1205 20:31:08.327805  585602 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 20:31:07.014634  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:08.018024  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.018062  585113 pod_ready.go:82] duration metric: took 3.010340127s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.018080  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024700  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.024731  585113 pod_ready.go:82] duration metric: took 6.639434ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.024744  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030379  585113 pod_ready.go:93] pod "kube-proxy-h79dj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.030399  585113 pod_ready.go:82] duration metric: took 5.648086ms for pod "kube-proxy-h79dj" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.030408  585113 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036191  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:31:08.036211  585113 pod_ready.go:82] duration metric: took 5.797344ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:08.036223  585113 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	I1205 20:31:10.051737  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:08.329278  585602 main.go:141] libmachine: (old-k8s-version-386085) Calling .GetIP
	I1205 20:31:08.332352  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332700  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:06:a4", ip: ""} in network mk-old-k8s-version-386085: {Iface:virbr4 ExpiryTime:2024-12-05 21:30:56 +0000 UTC Type:0 Mac:52:54:00:6a:06:a4 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:old-k8s-version-386085 Clientid:01:52:54:00:6a:06:a4}
	I1205 20:31:08.332747  585602 main.go:141] libmachine: (old-k8s-version-386085) DBG | domain old-k8s-version-386085 has defined IP address 192.168.72.144 and MAC address 52:54:00:6a:06:a4 in network mk-old-k8s-version-386085
	I1205 20:31:08.332930  585602 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:08.337611  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
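The /etc/hosts rewrite above is the idempotent pattern minikube uses for its internal names: strip any existing line for the name, append a fresh one with the current IP, then copy the temp file back into place. In general form (NAME and IP are placeholders, and the tab between them is significant):

    { grep -v $'\tNAME$' /etc/hosts; printf 'IP\tNAME\n'; } > /tmp/h.$$   # drop any old entry, re-add it with the current IP
    sudo cp /tmp/h.$$ /etc/hosts                                          # install the rewritten file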
	I1205 20:31:08.350860  585602 kubeadm.go:883] updating cluster {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:08.351016  585602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:31:08.351090  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:08.403640  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:08.403716  585602 ssh_runner.go:195] Run: which lz4
	I1205 20:31:08.408211  585602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:08.413136  585602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:08.413168  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 20:31:10.209351  585602 crio.go:462] duration metric: took 1.801169802s to copy over tarball
	I1205 20:31:10.209438  585602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
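The preload tarball copied above (about 473 MB) carries pre-pulled images and related state for v1.20.0 on CRI-O, unpacked under /var; the tar flags preserve the security.capability xattrs that some binaries in the images rely on. The manual equivalent, with flags and paths copied from the command above (lz4 must be available on the guest):

    tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4   # unpack the cached image store into /var
    rm /preloaded.tar.lz4                                                                     # the log removes the tarball once extraction finishes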
	I1205 20:31:08.255781  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting to get IP...
	I1205 20:31:08.256721  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.257262  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.257164  586715 retry.go:31] will retry after 301.077952ms: waiting for machine to come up
	I1205 20:31:08.559682  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560187  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.560216  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.560130  586715 retry.go:31] will retry after 364.457823ms: waiting for machine to come up
	I1205 20:31:08.926774  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927371  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:08.927401  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:08.927274  586715 retry.go:31] will retry after 461.958198ms: waiting for machine to come up
	I1205 20:31:09.390861  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391502  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.391531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.391432  586715 retry.go:31] will retry after 587.049038ms: waiting for machine to come up
	I1205 20:31:09.980451  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.980999  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:09.981026  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:09.980932  586715 retry.go:31] will retry after 499.551949ms: waiting for machine to come up
	I1205 20:31:10.482653  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483188  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:10.483219  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:10.483135  586715 retry.go:31] will retry after 749.476034ms: waiting for machine to come up
	I1205 20:31:11.233788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234286  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:11.234315  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:11.234227  586715 retry.go:31] will retry after 768.81557ms: waiting for machine to come up
	I1205 20:31:12.004904  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005427  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:12.005460  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:12.005382  586715 retry.go:31] will retry after 1.360132177s: waiting for machine to come up
	I1205 20:31:12.549406  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:15.043540  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:13.303553  585602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094044744s)
	I1205 20:31:13.303598  585602 crio.go:469] duration metric: took 3.094215888s to extract the tarball
	I1205 20:31:13.303610  585602 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:13.350989  585602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:13.388660  585602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 20:31:13.388702  585602 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:13.388814  585602 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.388822  585602 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.388832  585602 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.388853  585602 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.388881  585602 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.388904  585602 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 20:31:13.388823  585602 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.388859  585602 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390414  585602 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.390941  585602 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.390924  585602 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.391016  585602 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.390927  585602 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.391373  585602 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.391378  585602 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:13.565006  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.577450  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 20:31:13.584653  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.597086  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.619848  585602 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 20:31:13.619899  585602 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.619955  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.623277  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.628407  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.697151  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.703111  585602 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 20:31:13.703167  585602 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 20:31:13.703219  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736004  585602 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 20:31:13.736059  585602 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.736058  585602 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 20:31:13.736078  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.736094  585602 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.736104  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736135  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.736187  585602 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 20:31:13.736207  585602 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.736235  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.783651  585602 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 20:31:13.783706  585602 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.783758  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.787597  585602 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 20:31:13.787649  585602 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.787656  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.787692  585602 ssh_runner.go:195] Run: which crictl
	I1205 20:31:13.828445  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.828491  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.828544  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.828573  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:13.828616  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.828635  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.890937  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:13.992480  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 20:31:13.992600  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:13.992661  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:13.992725  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:13.992780  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.095364  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 20:31:14.095462  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 20:31:14.163224  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 20:31:14.163320  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 20:31:14.163339  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 20:31:14.163420  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 20:31:14.163510  585602 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 20:31:14.243805  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 20:31:14.243860  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 20:31:14.243881  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 20:31:14.287718  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 20:31:14.290994  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 20:31:14.291049  585602 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 20:31:14.579648  585602 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:14.728232  585602 cache_images.go:92] duration metric: took 1.339506459s to LoadCachedImages
	W1205 20:31:14.728389  585602 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
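The warning above means neither the preload nor the local per-image cache could supply the v1.20.0 control-plane images, so the load step is skipped; the missing images are presumably pulled from registry.k8s.io by CRI-O once the kubeadm phases further down start the static pods. To see what the runtime already has at this point, the same command the log keeps running works interactively:

    sudo crictl images   # anything not listed here has to be pulled during the kubeadm phases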
	I1205 20:31:14.728417  585602 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.20.0 crio true true} ...
	I1205 20:31:14.728570  585602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-386085 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
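The [Unit]/[Service] fragment above becomes the kubelet drop-in written a few steps further down (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Once kubelet has been restarted, the effective unit can be inspected on the guest, for example:

    systemctl cat kubelet      # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl status kubelet   # shows whether the ExecStart override above is the command line actually running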
	I1205 20:31:14.728672  585602 ssh_runner.go:195] Run: crio config
	I1205 20:31:14.778932  585602 cni.go:84] Creating CNI manager for ""
	I1205 20:31:14.778957  585602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:14.778967  585602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:14.778987  585602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-386085 NodeName:old-k8s-version-386085 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:31:14.779131  585602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-386085"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:31:14.779196  585602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 20:31:14.792400  585602 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:14.792494  585602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:14.802873  585602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:31:14.821562  585602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:14.839442  585602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
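At this point the kubeadm configuration rendered above has been written to the guest as /var/tmp/minikube/kubeadm.yaml.new. One way to sanity-check such a file by hand, which the log does not do, is a kubeadm dry run against the same binaries:

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run   # validates the config and prints what would be done without touching the node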
	I1205 20:31:14.861314  585602 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:14.865457  585602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:14.878278  585602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:15.002193  585602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:15.030699  585602 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085 for IP: 192.168.72.144
	I1205 20:31:15.030734  585602 certs.go:194] generating shared ca certs ...
	I1205 20:31:15.030758  585602 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.030975  585602 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:15.031027  585602 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:15.031048  585602 certs.go:256] generating profile certs ...
	I1205 20:31:15.031206  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.key
	I1205 20:31:15.031276  585602 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key.87b35b18
	I1205 20:31:15.031324  585602 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key
	I1205 20:31:15.031489  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:15.031535  585602 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:15.031550  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:15.031581  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:15.031612  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:15.031644  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:15.031698  585602 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:15.032410  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:15.063090  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:15.094212  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:15.124685  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:15.159953  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 20:31:15.204250  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:31:15.237483  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:15.276431  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:15.303774  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:15.328872  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:15.353852  585602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:15.380916  585602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:15.401082  585602 ssh_runner.go:195] Run: openssl version
	I1205 20:31:15.407442  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:15.420377  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425721  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.425800  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:15.432475  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:15.446140  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:15.459709  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465165  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.465241  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:15.471609  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:15.484139  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:15.496636  585602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501575  585602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.501634  585602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:15.507814  585602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:31:15.521234  585602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:15.526452  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:15.532999  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:15.540680  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:15.547455  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:15.553996  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:15.560574  585602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
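Each of the openssl checks above asks one question: will this certificate still be valid 86400 seconds (24 hours) from now? -checkend exits 0 if the certificate does not expire within that window and non-zero if it does. Interactively, for one of the same files:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo "still valid for at least 24h" \
        || echo "expires within 24h"   # a non-zero exit is what flags a cert for regeneration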
	I1205 20:31:15.568489  585602 kubeadm.go:392] StartCluster: {Name:old-k8s-version-386085 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-386085 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:15.568602  585602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:15.568682  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.610693  585602 cri.go:89] found id: ""
	I1205 20:31:15.610808  585602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:15.622685  585602 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:15.622709  585602 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:15.622764  585602 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:15.633754  585602 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:15.634922  585602 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-386085" does not appear in /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:31:15.635682  585602 kubeconfig.go:62] /home/jenkins/minikube-integration/20052-530897/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-386085" cluster setting kubeconfig missing "old-k8s-version-386085" context setting]
	I1205 20:31:15.636878  585602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:15.719767  585602 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:15.731576  585602 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.144
	I1205 20:31:15.731622  585602 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:15.731639  585602 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:15.731705  585602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:15.777769  585602 cri.go:89] found id: ""
	I1205 20:31:15.777875  585602 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:15.797121  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:15.807961  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:15.807991  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:15.808042  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:31:15.818177  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:15.818270  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:15.829092  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:31:15.839471  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:15.839564  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:15.850035  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.859907  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:15.859984  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:15.870882  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:31:15.881475  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:15.881549  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:31:15.892078  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:15.904312  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.042308  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:16.787487  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:13.367666  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368154  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:13.368185  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:13.368096  586715 retry.go:31] will retry after 1.319101375s: waiting for machine to come up
	I1205 20:31:14.689562  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690039  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:14.690067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:14.689996  586715 retry.go:31] will retry after 2.267379471s: waiting for machine to come up
	I1205 20:31:16.959412  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959882  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:16.959915  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:16.959804  586715 retry.go:31] will retry after 2.871837018s: waiting for machine to come up
	I1205 20:31:17.044878  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:19.543265  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:17.036864  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.128855  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:17.219276  585602 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:17.219380  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:17.720206  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.219623  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:18.719555  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.219776  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.719967  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.219686  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:20.719806  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.219875  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:21.719915  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:19.834750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835299  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:19.835326  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:19.835203  586715 retry.go:31] will retry after 2.740879193s: waiting for machine to come up
	I1205 20:31:22.577264  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577746  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | unable to find current IP address of domain default-k8s-diff-port-942599 in network mk-default-k8s-diff-port-942599
	I1205 20:31:22.577775  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | I1205 20:31:22.577709  586715 retry.go:31] will retry after 3.807887487s: waiting for machine to come up
	I1205 20:31:22.043635  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:24.543255  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:22.219930  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:22.719848  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:23.719903  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.220505  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:24.719726  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.220161  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:25.720115  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.220399  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:26.719567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:27.669618  585025 start.go:364] duration metric: took 59.106849765s to acquireMachinesLock for "no-preload-816185"
	I1205 20:31:27.669680  585025 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:31:27.669689  585025 fix.go:54] fixHost starting: 
	I1205 20:31:27.670111  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:31:27.670153  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:31:27.689600  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1205 20:31:27.690043  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:31:27.690508  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:31:27.690530  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:31:27.690931  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:31:27.691146  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:27.691279  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:31:27.692881  585025 fix.go:112] recreateIfNeeded on no-preload-816185: state=Stopped err=<nil>
	I1205 20:31:27.692905  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	W1205 20:31:27.693059  585025 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:31:27.694833  585025 out.go:177] * Restarting existing kvm2 VM for "no-preload-816185" ...
	I1205 20:31:26.389296  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389828  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Found IP for machine: 192.168.50.96
	I1205 20:31:26.389866  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has current primary IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.389876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserving static IP address...
	I1205 20:31:26.390321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Reserved static IP address: 192.168.50.96
	I1205 20:31:26.390354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Waiting for SSH to be available...
	I1205 20:31:26.390380  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.390404  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | skip adding static IP to network mk-default-k8s-diff-port-942599 - found existing host DHCP lease matching {name: "default-k8s-diff-port-942599", mac: "52:54:00:f6:dd:0f", ip: "192.168.50.96"}
	I1205 20:31:26.390420  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Getting to WaitForSSH function...
	I1205 20:31:26.392509  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392875  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.392912  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.392933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH client type: external
	I1205 20:31:26.392988  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa (-rw-------)
	I1205 20:31:26.393057  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:26.393086  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | About to run SSH command:
	I1205 20:31:26.393105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | exit 0
	I1205 20:31:26.520867  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:26.521212  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetConfigRaw
	I1205 20:31:26.521857  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.524512  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.524853  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.524883  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.525141  585929 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/config.json ...
	I1205 20:31:26.525404  585929 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:26.525425  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:26.525639  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.527806  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.528121  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.528257  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.528474  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528635  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.528771  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.528902  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.529132  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.529147  585929 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:26.645385  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:26.645429  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645719  585929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-942599"
	I1205 20:31:26.645751  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.645962  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.648906  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649316  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.649346  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.649473  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.649686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649880  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.649998  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.650161  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.650338  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.650354  585929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-942599 && echo "default-k8s-diff-port-942599" | sudo tee /etc/hostname
	I1205 20:31:26.780217  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942599
	
	I1205 20:31:26.780253  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.783240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783628  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.783660  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.783804  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:26.783997  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784162  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:26.784321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:26.784530  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:26.784747  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:26.784766  585929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-942599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-942599/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-942599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:26.909975  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:26.910006  585929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:26.910087  585929 buildroot.go:174] setting up certificates
	I1205 20:31:26.910101  585929 provision.go:84] configureAuth start
	I1205 20:31:26.910114  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetMachineName
	I1205 20:31:26.910440  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:26.913667  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914067  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.914094  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.914321  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:26.917031  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917430  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:26.917462  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:26.917608  585929 provision.go:143] copyHostCerts
	I1205 20:31:26.917681  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:26.917706  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:26.917772  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:26.917889  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:26.917900  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:26.917935  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:26.918013  585929 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:26.918023  585929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:26.918065  585929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:26.918163  585929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-942599 san=[127.0.0.1 192.168.50.96 default-k8s-diff-port-942599 localhost minikube]
	I1205 20:31:27.003691  585929 provision.go:177] copyRemoteCerts
	I1205 20:31:27.003783  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:27.003821  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.006311  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006632  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.006665  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.006820  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.007011  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.007153  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.007274  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.094973  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:27.121684  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 20:31:27.146420  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:27.171049  585929 provision.go:87] duration metric: took 260.930345ms to configureAuth
	I1205 20:31:27.171083  585929 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:27.171268  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:27.171385  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.174287  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174677  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.174717  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.174946  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.175168  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175338  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.175531  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.175703  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.175927  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.175959  585929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:27.416697  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:27.416724  585929 machine.go:96] duration metric: took 891.305367ms to provisionDockerMachine
	I1205 20:31:27.416737  585929 start.go:293] postStartSetup for "default-k8s-diff-port-942599" (driver="kvm2")
	I1205 20:31:27.416748  585929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:27.416786  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.417143  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:27.417183  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.419694  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420041  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.420072  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.420259  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.420488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.420681  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.420813  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.507592  585929 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:27.512178  585929 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:27.512209  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:27.512297  585929 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:27.512416  585929 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:27.512544  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:27.522860  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:27.550167  585929 start.go:296] duration metric: took 133.414654ms for postStartSetup
	I1205 20:31:27.550211  585929 fix.go:56] duration metric: took 20.652352836s for fixHost
	I1205 20:31:27.550240  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.553056  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.553490  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.553631  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.553822  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554007  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.554166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.554372  585929 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:27.554584  585929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I1205 20:31:27.554603  585929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:27.669428  585929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430687.619179277
	
	I1205 20:31:27.669455  585929 fix.go:216] guest clock: 1733430687.619179277
	I1205 20:31:27.669467  585929 fix.go:229] Guest: 2024-12-05 20:31:27.619179277 +0000 UTC Remote: 2024-12-05 20:31:27.550217419 +0000 UTC m=+204.551998169 (delta=68.961858ms)
	I1205 20:31:27.669506  585929 fix.go:200] guest clock delta is within tolerance: 68.961858ms
	I1205 20:31:27.669514  585929 start.go:83] releasing machines lock for "default-k8s-diff-port-942599", held for 20.771694403s
	I1205 20:31:27.669559  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.669877  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:27.672547  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.672978  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.673009  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.673224  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673788  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.673992  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:31:27.674125  585929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:27.674176  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.674201  585929 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:27.674231  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:31:27.677006  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677388  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677418  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677437  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.677565  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.677745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.677919  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.677925  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:27.677948  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:27.678115  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:31:27.678107  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.678258  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:31:27.678382  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:31:27.678527  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:31:27.790786  585929 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:27.797092  585929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:27.946053  585929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:27.953979  585929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:27.954073  585929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:27.975059  585929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:27.975090  585929 start.go:495] detecting cgroup driver to use...
	I1205 20:31:27.975160  585929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:27.991738  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:28.006412  585929 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:28.006529  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:28.021329  585929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:28.037390  585929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:28.155470  585929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:28.326332  585929 docker.go:233] disabling docker service ...
	I1205 20:31:28.326415  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:28.343299  585929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:28.358147  585929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:28.493547  585929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:28.631184  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:28.647267  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:28.670176  585929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:28.670269  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.686230  585929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:28.686312  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.702991  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.715390  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.731909  585929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:28.745042  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.757462  585929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.779049  585929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:28.790960  585929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:28.806652  585929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:28.806724  585929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:28.821835  585929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:31:28.832688  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:28.967877  585929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:29.084571  585929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:29.084666  585929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:29.089892  585929 start.go:563] Will wait 60s for crictl version
	I1205 20:31:29.089958  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:31:29.094021  585929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:29.132755  585929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:29.132843  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.161779  585929 ssh_runner.go:195] Run: crio --version
	I1205 20:31:29.194415  585929 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:31:27.042893  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:29.545284  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:27.696342  585025 main.go:141] libmachine: (no-preload-816185) Calling .Start
	I1205 20:31:27.696546  585025 main.go:141] libmachine: (no-preload-816185) Ensuring networks are active...
	I1205 20:31:27.697272  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network default is active
	I1205 20:31:27.697720  585025 main.go:141] libmachine: (no-preload-816185) Ensuring network mk-no-preload-816185 is active
	I1205 20:31:27.698153  585025 main.go:141] libmachine: (no-preload-816185) Getting domain xml...
	I1205 20:31:27.698993  585025 main.go:141] libmachine: (no-preload-816185) Creating domain...
	I1205 20:31:29.005551  585025 main.go:141] libmachine: (no-preload-816185) Waiting to get IP...
	I1205 20:31:29.006633  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.007124  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.007217  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.007100  586921 retry.go:31] will retry after 264.716976ms: waiting for machine to come up
	I1205 20:31:29.273821  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.274364  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.274393  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.274318  586921 retry.go:31] will retry after 307.156436ms: waiting for machine to come up
	I1205 20:31:29.582968  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.583583  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.583621  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.583531  586921 retry.go:31] will retry after 335.63624ms: waiting for machine to come up
	I1205 20:31:29.921262  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:29.921823  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:29.921855  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:29.921771  586921 retry.go:31] will retry after 577.408278ms: waiting for machine to come up
	I1205 20:31:30.500556  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:30.501058  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:30.501095  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:30.500999  586921 retry.go:31] will retry after 757.019094ms: waiting for machine to come up
	I1205 20:31:27.220124  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:27.719460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:28.719599  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.219672  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.720450  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.220436  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:30.719573  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.220357  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:31.720052  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:29.195845  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetIP
	I1205 20:31:29.198779  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199138  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:31:29.199171  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:31:29.199365  585929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:29.204553  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:29.217722  585929 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:29.217873  585929 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:29.217943  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:29.259006  585929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:29.259105  585929 ssh_runner.go:195] Run: which lz4
	I1205 20:31:29.264049  585929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:31:29.268978  585929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:31:29.269019  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:31:30.811247  585929 crio.go:462] duration metric: took 1.547244528s to copy over tarball
	I1205 20:31:30.811340  585929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:31:32.043543  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:34.044420  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:31.260083  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.260626  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.260658  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.260593  586921 retry.go:31] will retry after 593.111543ms: waiting for machine to come up
	I1205 20:31:31.854850  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:31.855286  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:31.855316  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:31.855224  586921 retry.go:31] will retry after 832.693762ms: waiting for machine to come up
	I1205 20:31:32.690035  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:32.690489  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:32.690515  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:32.690448  586921 retry.go:31] will retry after 1.128242733s: waiting for machine to come up
	I1205 20:31:33.820162  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:33.820798  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:33.820831  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:33.820732  586921 retry.go:31] will retry after 1.331730925s: waiting for machine to come up
	I1205 20:31:35.154230  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:35.154661  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:35.154690  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:35.154590  586921 retry.go:31] will retry after 2.19623815s: waiting for machine to come up
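	Aside (not part of the captured log): the retry.go entries above show libmachine waiting for the VM to come up by probing and then sleeping for a growing, jittered interval (roughly 0.5s up to a few seconds). A minimal Go sketch of that general pattern follows; it is illustrative only, not minikube's actual retry code, and the check function, growth factor and cap are assumptions.

// retry_sketch.go - illustrative only; not taken from the minikube sources.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor keeps calling check until it succeeds or the overall timeout passes.
// The delay grows each attempt and gets a random jitter, loosely matching the
// intervals visible in the retry.go log lines above.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d failed, will retry after %s\n", attempt, jittered)
		time.Sleep(jittered)
		if delay < 4*time.Second {
			delay += delay / 2 // grow ~1.5x per attempt, with a cap
		}
	}
}

func main() {
	start := time.Now()
	_ = waitFor(func() error {
		if time.Since(start) > 3*time.Second {
			return nil // pretend the machine "came up"
		}
		return errors.New("machine not up yet")
	}, 30*time.Second)
	fmt.Println("machine is up")
}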
	I1205 20:31:32.220318  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:32.719780  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.220114  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.719554  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.220187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:34.720021  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.219461  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:35.720334  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.219480  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.720159  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:33.093756  585929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282380101s)
	I1205 20:31:33.093791  585929 crio.go:469] duration metric: took 2.282510298s to extract the tarball
	I1205 20:31:33.093802  585929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:31:33.132232  585929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:33.188834  585929 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:31:33.188868  585929 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:31:33.188879  585929 kubeadm.go:934] updating node { 192.168.50.96 8444 v1.31.2 crio true true} ...
	I1205 20:31:33.189027  585929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-942599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:31:33.189114  585929 ssh_runner.go:195] Run: crio config
	I1205 20:31:33.235586  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:31:33.235611  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:31:33.235621  585929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:31:33.235644  585929 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-942599 NodeName:default-k8s-diff-port-942599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:31:33.235770  585929 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-942599"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.96"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
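	Aside (not part of the captured log): the block above is the complete multi-document kubeadm/kubelet/kube-proxy configuration that minikube renders before copying it to /var/tmp/minikube/kubeadm.yaml.new. As an illustrative sketch only, such a file can be inspected offline; the snippet below assumes a hypothetical local copy named kubeadm.yaml and uses gopkg.in/yaml.v3 to print each document's kind and, where present, its controlPlaneEndpoint.

// config_peek.go - illustrative only; assumes the config above was saved locally
// as kubeadm.yaml. Not part of the captured test run or the minikube sources.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// The file contains several YAML documents separated by "---", so decode in a loop.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
		if ep, ok := doc["controlPlaneEndpoint"]; ok {
			fmt.Printf("  controlPlaneEndpoint=%v\n", ep)
		}
	}
}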
	I1205 20:31:33.235835  585929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:31:33.246737  585929 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:31:33.246829  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:31:33.257763  585929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1205 20:31:33.276025  585929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:31:33.294008  585929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1205 20:31:33.311640  585929 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I1205 20:31:33.315963  585929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:31:33.328834  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:33.439221  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:31:33.457075  585929 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599 for IP: 192.168.50.96
	I1205 20:31:33.457103  585929 certs.go:194] generating shared ca certs ...
	I1205 20:31:33.457131  585929 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:31:33.457337  585929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:31:33.457407  585929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:31:33.457420  585929 certs.go:256] generating profile certs ...
	I1205 20:31:33.457528  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.key
	I1205 20:31:33.457612  585929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key.d50b8fb2
	I1205 20:31:33.457668  585929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key
	I1205 20:31:33.457824  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:31:33.457870  585929 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:31:33.457885  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:31:33.457924  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:31:33.457959  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:31:33.457989  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:31:33.458044  585929 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:33.459092  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:31:33.502129  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:31:33.533461  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:31:33.572210  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:31:33.597643  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 20:31:33.621382  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:31:33.648568  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:31:33.682320  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:31:33.707415  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:31:33.733418  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:31:33.760333  585929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:31:33.794070  585929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:31:33.813531  585929 ssh_runner.go:195] Run: openssl version
	I1205 20:31:33.820336  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:31:33.832321  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839066  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.839135  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:31:33.845526  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:31:33.857376  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:31:33.868864  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873732  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.873799  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:31:33.881275  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:31:33.893144  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:31:33.904679  585929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909686  585929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.909760  585929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:31:33.915937  585929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:31:33.927401  585929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:31:33.932326  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:31:33.939165  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:31:33.945630  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:31:33.951867  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:31:33.957857  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:31:33.963994  585929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:31:33.969964  585929 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-942599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-942599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:31:33.970050  585929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:31:33.970103  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.016733  585929 cri.go:89] found id: ""
	I1205 20:31:34.016814  585929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:31:34.027459  585929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:31:34.027478  585929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:31:34.027523  585929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:31:34.037483  585929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:31:34.038588  585929 kubeconfig.go:125] found "default-k8s-diff-port-942599" server: "https://192.168.50.96:8444"
	I1205 20:31:34.041140  585929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:31:34.050903  585929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.96
	I1205 20:31:34.050938  585929 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:31:34.050956  585929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:31:34.051014  585929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:31:34.090840  585929 cri.go:89] found id: ""
	I1205 20:31:34.090932  585929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:31:34.107686  585929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:31:34.118277  585929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:31:34.118305  585929 kubeadm.go:157] found existing configuration files:
	
	I1205 20:31:34.118359  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 20:31:34.127654  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:31:34.127733  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:31:34.137295  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 20:31:34.147005  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:31:34.147076  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:31:34.158576  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.167933  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:31:34.168022  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:31:34.177897  585929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 20:31:34.187467  585929 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:31:34.187539  585929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:31:34.197825  585929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:31:34.210775  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:34.337491  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.308389  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.549708  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.624390  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:31:35.706794  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:31:35.706912  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.207620  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.707990  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:36.727214  585929 api_server.go:72] duration metric: took 1.020418782s to wait for apiserver process to appear ...
	I1205 20:31:36.727257  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:31:36.727289  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:36.727908  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
	I1205 20:31:37.228102  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
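	Aside (not part of the captured log): the api_server.go lines above, and the matching ones later in this log, repeatedly probe https://192.168.50.96:8444/healthz until the restarted apiserver answers. A minimal Go sketch of such a health probe is shown below; it is illustrative only, not minikube's actual client code, and the URL, timeouts and the InsecureSkipVerify shortcut (acceptable only because the test apiserver uses a self-signed certificate) are assumptions.

// healthz_probe.go - illustrative only; not taken from the minikube sources.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Test-only shortcut; a real client would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	// Address and port mirror the ones in the log; adjust for a real cluster.
	if err := waitForHealthz("https://192.168.50.96:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}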
	I1205 20:31:36.544564  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:39.043806  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:37.352371  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:37.352911  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:37.352946  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:37.352862  586921 retry.go:31] will retry after 2.333670622s: waiting for machine to come up
	I1205 20:31:39.688034  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:39.688597  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:39.688630  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:39.688537  586921 retry.go:31] will retry after 2.476657304s: waiting for machine to come up
	I1205 20:31:37.219933  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:37.720360  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.219574  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:38.720034  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.219449  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:39.719752  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.219718  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:40.719771  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.219548  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:41.720381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.228416  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:42.228489  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:41.044569  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:43.542439  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:45.543063  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:42.168384  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:42.168759  585025 main.go:141] libmachine: (no-preload-816185) DBG | unable to find current IP address of domain no-preload-816185 in network mk-no-preload-816185
	I1205 20:31:42.168781  585025 main.go:141] libmachine: (no-preload-816185) DBG | I1205 20:31:42.168719  586921 retry.go:31] will retry after 3.531210877s: waiting for machine to come up
	I1205 20:31:45.701387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701831  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has current primary IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.701868  585025 main.go:141] libmachine: (no-preload-816185) Found IP for machine: 192.168.61.37
	I1205 20:31:45.701882  585025 main.go:141] libmachine: (no-preload-816185) Reserving static IP address...
	I1205 20:31:45.702270  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.702313  585025 main.go:141] libmachine: (no-preload-816185) DBG | skip adding static IP to network mk-no-preload-816185 - found existing host DHCP lease matching {name: "no-preload-816185", mac: "52:54:00:5f:85:a7", ip: "192.168.61.37"}
	I1205 20:31:45.702327  585025 main.go:141] libmachine: (no-preload-816185) Reserved static IP address: 192.168.61.37
	I1205 20:31:45.702343  585025 main.go:141] libmachine: (no-preload-816185) Waiting for SSH to be available...
	I1205 20:31:45.702355  585025 main.go:141] libmachine: (no-preload-816185) DBG | Getting to WaitForSSH function...
	I1205 20:31:45.704606  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.704941  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.704964  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.705115  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH client type: external
	I1205 20:31:45.705146  585025 main.go:141] libmachine: (no-preload-816185) DBG | Using SSH private key: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa (-rw-------)
	I1205 20:31:45.705181  585025 main.go:141] libmachine: (no-preload-816185) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:31:45.705212  585025 main.go:141] libmachine: (no-preload-816185) DBG | About to run SSH command:
	I1205 20:31:45.705224  585025 main.go:141] libmachine: (no-preload-816185) DBG | exit 0
	I1205 20:31:45.828472  585025 main.go:141] libmachine: (no-preload-816185) DBG | SSH cmd err, output: <nil>: 
	I1205 20:31:45.828882  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetConfigRaw
	I1205 20:31:45.829596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:45.832338  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832643  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.832671  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.832970  585025 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/config.json ...
	I1205 20:31:45.833244  585025 machine.go:93] provisionDockerMachine start ...
	I1205 20:31:45.833275  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:45.833498  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.835937  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836344  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.836375  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.836555  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.836744  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.836906  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.837046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.837207  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.837441  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.837456  585025 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:31:45.940890  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 20:31:45.940926  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941234  585025 buildroot.go:166] provisioning hostname "no-preload-816185"
	I1205 20:31:45.941262  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:45.941453  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:45.944124  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944537  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:45.944585  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:45.944677  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:45.944862  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945026  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:45.945169  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:45.945343  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:45.945511  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:45.945523  585025 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-816185 && echo "no-preload-816185" | sudo tee /etc/hostname
	I1205 20:31:42.220435  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:42.720366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.219567  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:43.719652  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.220259  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:44.719556  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.219850  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:45.720302  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.220377  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:46.720107  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.229369  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:47.229421  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:46.063755  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-816185
	
	I1205 20:31:46.063794  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.066742  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067177  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.067208  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.067371  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.067576  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067756  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.067937  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.068147  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.068392  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.068411  585025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-816185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-816185/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-816185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:31:46.182072  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:31:46.182110  585025 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20052-530897/.minikube CaCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20052-530897/.minikube}
	I1205 20:31:46.182144  585025 buildroot.go:174] setting up certificates
	I1205 20:31:46.182160  585025 provision.go:84] configureAuth start
	I1205 20:31:46.182172  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetMachineName
	I1205 20:31:46.182490  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:46.185131  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185461  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.185493  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.185684  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.188070  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188467  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.188499  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.188606  585025 provision.go:143] copyHostCerts
	I1205 20:31:46.188674  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem, removing ...
	I1205 20:31:46.188695  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem
	I1205 20:31:46.188753  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/ca.pem (1078 bytes)
	I1205 20:31:46.188860  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem, removing ...
	I1205 20:31:46.188872  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem
	I1205 20:31:46.188892  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/cert.pem (1123 bytes)
	I1205 20:31:46.188973  585025 exec_runner.go:144] found /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem, removing ...
	I1205 20:31:46.188980  585025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem
	I1205 20:31:46.188998  585025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20052-530897/.minikube/key.pem (1679 bytes)
	I1205 20:31:46.189044  585025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem org=jenkins.no-preload-816185 san=[127.0.0.1 192.168.61.37 localhost minikube no-preload-816185]
	I1205 20:31:46.460195  585025 provision.go:177] copyRemoteCerts
	I1205 20:31:46.460323  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:31:46.460394  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.463701  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464171  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.464224  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.464422  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.464646  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.464839  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.465024  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.557665  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 20:31:46.583225  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:31:46.608114  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:31:46.633059  585025 provision.go:87] duration metric: took 450.879004ms to configureAuth
	I1205 20:31:46.633100  585025 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:31:46.633319  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:31:46.633400  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.636634  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637103  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.637138  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.637368  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.637624  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.637841  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.638000  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.638189  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:46.638425  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:46.638442  585025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:31:46.877574  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:31:46.877610  585025 machine.go:96] duration metric: took 1.044347044s to provisionDockerMachine
	I1205 20:31:46.877623  585025 start.go:293] postStartSetup for "no-preload-816185" (driver="kvm2")
	I1205 20:31:46.877634  585025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:31:46.877668  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:46.878007  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:31:46.878046  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:46.881022  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881361  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:46.881422  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:46.881554  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:46.881741  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:46.881883  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:46.882045  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:46.967997  585025 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:31:46.972667  585025 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:31:46.972697  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/addons for local assets ...
	I1205 20:31:46.972770  585025 filesync.go:126] Scanning /home/jenkins/minikube-integration/20052-530897/.minikube/files for local assets ...
	I1205 20:31:46.972844  585025 filesync.go:149] local asset: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem -> 5381862.pem in /etc/ssl/certs
	I1205 20:31:46.972931  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:31:46.983157  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:31:47.009228  585025 start.go:296] duration metric: took 131.588013ms for postStartSetup
	I1205 20:31:47.009272  585025 fix.go:56] duration metric: took 19.33958416s for fixHost
	I1205 20:31:47.009296  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.012039  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012388  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.012416  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.012620  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.012858  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013022  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.013166  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.013318  585025 main.go:141] libmachine: Using SSH client type: native
	I1205 20:31:47.013490  585025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.37 22 <nil> <nil>}
	I1205 20:31:47.013501  585025 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:31:47.117166  585025 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430707.083043174
	
	I1205 20:31:47.117195  585025 fix.go:216] guest clock: 1733430707.083043174
	I1205 20:31:47.117203  585025 fix.go:229] Guest: 2024-12-05 20:31:47.083043174 +0000 UTC Remote: 2024-12-05 20:31:47.009275956 +0000 UTC m=+361.003271038 (delta=73.767218ms)
	I1205 20:31:47.117226  585025 fix.go:200] guest clock delta is within tolerance: 73.767218ms
	I1205 20:31:47.117232  585025 start.go:83] releasing machines lock for "no-preload-816185", held for 19.447576666s
	I1205 20:31:47.117259  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.117541  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:47.120283  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120627  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.120653  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.120805  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121301  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121492  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:31:47.121612  585025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:31:47.121656  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.121727  585025 ssh_runner.go:195] Run: cat /version.json
	I1205 20:31:47.121750  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:31:47.124146  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124387  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124503  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124530  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124723  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:47.124745  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124922  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:31:47.124933  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125086  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125126  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:31:47.125227  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.125505  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:31:47.125653  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:31:47.221731  585025 ssh_runner.go:195] Run: systemctl --version
	I1205 20:31:47.228177  585025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:31:47.377695  585025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:31:47.384534  585025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:31:47.384623  585025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:31:47.402354  585025 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:31:47.402388  585025 start.go:495] detecting cgroup driver to use...
	I1205 20:31:47.402454  585025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:31:47.426593  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:31:47.443953  585025 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:31:47.444011  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:31:47.461107  585025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:31:47.477872  585025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:31:47.617097  585025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:31:47.780021  585025 docker.go:233] disabling docker service ...
	I1205 20:31:47.780140  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:31:47.795745  585025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:31:47.809573  585025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:31:47.959910  585025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:31:48.081465  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:31:48.096513  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:31:48.116342  585025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:31:48.116409  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.128016  585025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:31:48.128095  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.139511  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.151241  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.162858  585025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:31:48.174755  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.185958  585025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:31:48.203724  585025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
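
Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin pause_image, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and open unprivileged ports via default_sysctls. A rough sketch of the same idempotent key rewrite on a local copy of the file (illustrative only; minikube runs the sed one-liners over SSH as logged, and the setKey helper is hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setKey replaces an existing `key = ...` line or appends one if absent,
    // mirroring the `sed -i 's|^.*key = .*$|key = value|'` calls in the log.
    func setKey(conf []byte, key, value string) []byte {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	line := fmt.Sprintf("%s = %s", key, value)
    	if re.Match(conf) {
    		return re.ReplaceAll(conf, []byte(line))
    	}
    	return append(conf, []byte("\n"+line+"\n")...)
    }

    func main() {
    	path := "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
    	conf, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf = setKey(conf, "pause_image", `"registry.k8s.io/pause:3.10"`)
    	conf = setKey(conf, "cgroup_manager", `"cgroupfs"`)
    	conf = setKey(conf, "conmon_cgroup", `"pod"`)
    	if err := os.WriteFile(path, conf, 0o644); err != nil {
    		panic(err)
    	}
    }
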
	I1205 20:31:48.215682  585025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:31:48.226478  585025 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:31:48.226551  585025 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:31:48.242781  585025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
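
Note: the failed sysctl above is expected when the br_netfilter module is not loaded yet; the recovery is to modprobe it and make sure IPv4 forwarding is on before restarting CRI-O. A hedged shell-out sketch of that fallback (not the minikube code itself):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Probe the bridge netfilter sysctl first; failure usually means the
    	// br_netfilter module has not been loaded yet.
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
    			os.Exit(1)
    		}
    	}
    	// Enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
    		os.Exit(1)
    	}
    }
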
	I1205 20:31:48.254921  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:31:48.373925  585025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:31:48.471515  585025 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:31:48.471625  585025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:31:48.477640  585025 start.go:563] Will wait 60s for crictl version
	I1205 20:31:48.477707  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.481862  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:31:48.521367  585025 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:31:48.521465  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.552343  585025 ssh_runner.go:195] Run: crio --version
	I1205 20:31:48.583089  585025 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:31:48.043043  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:50.043172  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:48.584504  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetIP
	I1205 20:31:48.587210  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587539  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:31:48.587568  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:31:48.587788  585025 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:31:48.592190  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
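
Note: the `{ grep -v ...; echo ...; }` pipeline above is the idempotent way a host entry such as `192.168.61.1	host.minikube.internal` is refreshed: any stale line is stripped first, then the current mapping is appended. The same idea in Go, operating on a local file path purely for illustration (ensureHostEntry is a hypothetical helper, not a minikube API):

    package main

    import (
    	"os"
    	"strings"
    )

    // ensureHostEntry drops any existing line ending in the hostname and
    // appends "ip\thost", mirroring the grep -v / echo pipeline in the log.
    func ensureHostEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // stale entry for this hostname
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// /tmp/hosts stands in for /etc/hosts; the real flow edits it via sudo over SSH.
    	if err := ensureHostEntry("/tmp/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }
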
	I1205 20:31:48.606434  585025 kubeadm.go:883] updating cluster {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:31:48.606605  585025 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:31:48.606666  585025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:31:48.642948  585025 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:31:48.642978  585025 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:31:48.643061  585025 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.643092  585025 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.643168  585025 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.643075  585025 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.643116  585025 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.643248  585025 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 20:31:48.643119  585025 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644692  585025 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.644712  585025 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 20:31:48.644694  585025 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.644798  585025 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.644800  585025 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.644858  585025 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.644824  585025 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:48.811007  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.819346  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.859678  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 20:31:48.864065  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:48.864191  585025 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 20:31:48.864249  585025 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:48.864310  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.883959  585025 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 20:31:48.884022  585025 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:48.884078  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:48.902180  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:48.918167  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:48.946617  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.039706  585025 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 20:31:49.039760  585025 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.039783  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.039808  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039869  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.039887  585025 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 20:31:49.039913  585025 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.039938  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.039947  585025 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 20:31:49.039969  585025 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.040001  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.040002  585025 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 20:31:49.040026  585025 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.040069  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:49.098900  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.098990  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.105551  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.105588  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.105612  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.105646  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.201473  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 20:31:49.218211  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.257277  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 20:31:49.257335  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.257345  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.257479  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.316037  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 20:31:49.316135  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 20:31:49.316159  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.356780  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 20:31:49.356906  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:49.382843  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 20:31:49.405772  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 20:31:49.405863  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 20:31:49.428491  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 20:31:49.428541  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 20:31:49.428563  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428587  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 20:31:49.428611  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 20:31:49.428648  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:49.487794  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 20:31:49.487825  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 20:31:49.487893  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 20:31:49.487917  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:31:49.487927  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:49.488022  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:31:49.830311  585025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:47.219913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:47.720441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.220220  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:48.719997  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.219843  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:49.719591  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.220132  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:50.719528  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.219674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:51.720234  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
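
Note: the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a half-second polling loop waiting for the apiserver process to appear on that node. A minimal sketch of such a wait loop, with an assumed interval and timeout (the real values are minikube's own):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or the timeout expires.
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil // a matching process exists
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for " + pattern)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 4*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kube-apiserver process is up")
    }
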
	I1205 20:31:52.230527  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:52.230575  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:52.543415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:55.042668  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:52.150499  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.721854606s)
	I1205 20:31:52.150547  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 20:31:52.150573  585025 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150588  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.721911838s)
	I1205 20:31:52.150623  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 20:31:52.150627  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 20:31:52.150697  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (2.662646854s)
	I1205 20:31:52.150727  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 20:31:52.150752  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (2.662648047s)
	I1205 20:31:52.150776  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 20:31:52.150785  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.662799282s)
	I1205 20:31:52.150804  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1205 20:31:52.150834  585025 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.320487562s)
	I1205 20:31:52.150874  585025 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:31:52.150907  585025 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.150943  585025 ssh_runner.go:195] Run: which crictl
	I1205 20:31:55.858372  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.707687772s)
	I1205 20:31:55.858414  585025 ssh_runner.go:235] Completed: which crictl: (3.707446137s)
	I1205 20:31:55.858498  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:55.858426  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 20:31:55.858580  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.858640  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 20:31:55.901375  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:31:52.219602  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:52.719522  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.220117  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:53.720426  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.220177  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:54.720100  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.219569  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:55.719796  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.219490  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:56.720420  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.231370  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:31:57.231415  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.612431  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": read tcp 192.168.50.1:36198->192.168.50.96:8444: read: connection reset by peer
	I1205 20:31:57.727638  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:31:57.728368  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": dial tcp 192.168.50.96:8444: connect: connection refused
	I1205 20:31:57.042989  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:59.043517  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:31:57.843623  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.984954959s)
	I1205 20:31:57.843662  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 20:31:57.843683  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843731  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 20:31:57.843732  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.942323285s)
	I1205 20:31:57.843821  585025 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:00.030765  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.186998467s)
	I1205 20:32:00.030810  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 20:32:00.030840  585025 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.030846  585025 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.18699947s)
	I1205 20:32:00.030897  585025 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:32:00.030906  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 20:32:00.031026  585025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:31:57.219497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:57.720337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.219807  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.720112  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.219949  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:59.719626  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.219871  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:00.719466  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.219491  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:01.719760  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:31:58.227807  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:01.044658  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:03.542453  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:05.542887  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:01.486433  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.455500806s)
	I1205 20:32:01.486479  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 20:32:01.486512  585025 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:01.486513  585025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.455460879s)
	I1205 20:32:01.486589  585025 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:32:01.486592  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 20:32:03.658906  585025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.172262326s)
	I1205 20:32:03.658947  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 20:32:03.658979  585025 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:03.659024  585025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:32:04.304774  585025 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20052-530897/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 20:32:04.304825  585025 cache_images.go:123] Successfully loaded all cached images
	I1205 20:32:04.304832  585025 cache_images.go:92] duration metric: took 15.661840579s to LoadCachedImages
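
Note: the sequence that just completed is the "no preload" image path: for each required image, `podman image inspect` decides whether it is already in the CRI-O store; if not, any stale tag is removed with `crictl rmi`, the cached tarball from ~/.minikube/cache/images is staged on the VM (skipped when it already exists), and `podman load -i` imports it. A condensed, hedged sketch of that per-image decision, shelling out locally with assumed paths (ensureImage is a hypothetical helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    )

    // ensureImage makes sure an image is present in the runtime, loading it from
    // a cached tarball when it is missing. Commands and paths are illustrative.
    func ensureImage(image, tarball string) error {
    	// Already present in the container runtime? Then there is nothing to do.
    	if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
    		return nil
    	}
    	// Drop any stale tag so the load does not collide with an old digest.
    	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
    	// Import the cached tarball (already copied to the VM in the real flow).
    	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
    		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	cacheDir := "/var/lib/minikube/images" // staging directory seen in the log
    	if err := ensureImage("registry.k8s.io/etcd:3.5.15-0", filepath.Join(cacheDir, "etcd_3.5.15-0")); err != nil {
    		fmt.Println(err)
    	}
    }
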
	I1205 20:32:04.304846  585025 kubeadm.go:934] updating node { 192.168.61.37 8443 v1.31.2 crio true true} ...
	I1205 20:32:04.304983  585025 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-816185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:32:04.305057  585025 ssh_runner.go:195] Run: crio config
	I1205 20:32:04.350303  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:04.350332  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:04.350352  585025 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:32:04.350383  585025 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.37 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-816185 NodeName:no-preload-816185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:32:04.350534  585025 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-816185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.37"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.37"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:32:04.350618  585025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:32:04.362733  585025 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:32:04.362815  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:32:04.374219  585025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 20:32:04.392626  585025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:32:04.409943  585025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1205 20:32:04.428180  585025 ssh_runner.go:195] Run: grep 192.168.61.37	control-plane.minikube.internal$ /etc/hosts
	I1205 20:32:04.432433  585025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:32:04.447274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:04.591755  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:04.609441  585025 certs.go:68] Setting up /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185 for IP: 192.168.61.37
	I1205 20:32:04.609472  585025 certs.go:194] generating shared ca certs ...
	I1205 20:32:04.609494  585025 certs.go:226] acquiring lock for ca certs: {Name:mkcf23d68bcb6730f7d82493f51aee9d91d32e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:04.609664  585025 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key
	I1205 20:32:04.609729  585025 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key
	I1205 20:32:04.609745  585025 certs.go:256] generating profile certs ...
	I1205 20:32:04.609910  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/client.key
	I1205 20:32:04.609991  585025 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key.e9b85612
	I1205 20:32:04.610027  585025 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key
	I1205 20:32:04.610146  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem (1338 bytes)
	W1205 20:32:04.610173  585025 certs.go:480] ignoring /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186_empty.pem, impossibly tiny 0 bytes
	I1205 20:32:04.610182  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 20:32:04.610216  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:32:04.610264  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:32:04.610313  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/certs/key.pem (1679 bytes)
	I1205 20:32:04.610377  585025 certs.go:484] found cert: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem (1708 bytes)
	I1205 20:32:04.611264  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:32:04.642976  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:32:04.679840  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:32:04.707526  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1205 20:32:04.746333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:32:04.782671  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:32:04.819333  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:32:04.845567  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:32:04.870304  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:32:04.894597  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/certs/538186.pem --> /usr/share/ca-certificates/538186.pem (1338 bytes)
	I1205 20:32:04.918482  585025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/ssl/certs/5381862.pem --> /usr/share/ca-certificates/5381862.pem (1708 bytes)
	I1205 20:32:04.942992  585025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:32:04.960576  585025 ssh_runner.go:195] Run: openssl version
	I1205 20:32:04.966908  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/538186.pem && ln -fs /usr/share/ca-certificates/538186.pem /etc/ssl/certs/538186.pem"
	I1205 20:32:04.978238  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.982959  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:15 /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.983023  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/538186.pem
	I1205 20:32:04.989070  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/538186.pem /etc/ssl/certs/51391683.0"
	I1205 20:32:05.000979  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5381862.pem && ln -fs /usr/share/ca-certificates/5381862.pem /etc/ssl/certs/5381862.pem"
	I1205 20:32:05.012901  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.017583  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:15 /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.018169  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5381862.pem
	I1205 20:32:05.025450  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5381862.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:32:05.037419  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:32:05.050366  585025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055211  585025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.055255  585025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:32:05.061388  585025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
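
Note: each `ln -fs ... /etc/ssl/certs/<hash>.0` above names the link after the certificate's OpenSSL subject hash, which is how the system trust store indexes CA files. A sketch of producing that link locally, shelling out to openssl for the hash (illustrative; minikube performs the same steps over SSH with sudo):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "openssl hash failed:", err)
    		return
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for the minikube CA above
    	link := "/etc/ssl/certs/" + hash + ".0"
    	_ = os.Remove(link) // -f semantics: replace an existing link
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Fprintln(os.Stderr, "symlink failed:", err)
    	}
    }
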
	I1205 20:32:05.074182  585025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:32:05.079129  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:32:05.085580  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:32:05.091938  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:32:05.099557  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:32:05.105756  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:32:05.112019  585025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
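
Note: the `-checkend 86400` calls verify that each control-plane certificate stays valid for at least another 24 hours. The same check can be expressed with crypto/x509 instead of shelling out; a rough equivalent (expiresWithin is a hypothetical helper, not minikube's code):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h, would regenerate")
    	}
    }
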
	I1205 20:32:05.118426  585025 kubeadm.go:392] StartCluster: {Name:no-preload-816185 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-816185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:32:05.118540  585025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:32:05.118622  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.162731  585025 cri.go:89] found id: ""
	I1205 20:32:05.162821  585025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:32:05.174100  585025 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 20:32:05.174127  585025 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 20:32:05.174181  585025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:32:05.184949  585025 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:32:05.186127  585025 kubeconfig.go:125] found "no-preload-816185" server: "https://192.168.61.37:8443"
	I1205 20:32:05.188601  585025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:32:05.198779  585025 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.37
	I1205 20:32:05.198815  585025 kubeadm.go:1160] stopping kube-system containers ...
	I1205 20:32:05.198828  585025 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:32:05.198881  585025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:32:05.241175  585025 cri.go:89] found id: ""
	I1205 20:32:05.241247  585025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:32:05.259698  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:32:05.270282  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:32:05.270310  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:32:05.270370  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:32:05.280440  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:32:05.280519  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:32:05.290825  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:32:05.300680  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:32:05.300745  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:32:05.311108  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.320854  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:32:05.320918  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:32:05.331099  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:32:05.340948  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:32:05.341017  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:32:05.351280  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:32:05.361567  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:05.477138  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:02.220337  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:02.720145  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.219463  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.719913  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.219813  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:04.719940  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.219830  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:05.720324  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.220287  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:06.719584  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:03.228372  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:03.228433  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:08.042416  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:10.043011  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:06.259256  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.483460  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:06.557633  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
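	Rather than a full `kubeadm init`, the restart path replays individual init phases over SSH (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the same /var/tmp/minikube/kubeadm.yaml and with the minikube-provisioned binaries first on PATH. A rough sketch of that sequencing, reusing the hypothetical runSSH helper from the previous sketch; the phase list comes from the log, the helper is not minikube's API:

    package sketch

    import "fmt"

    // replayInitPhases runs the kubeadm init phases seen in the log, in order.
    func replayInitPhases(runSSH func(cmd string) error, k8sVersion, config string) error {
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, phase := range phases {
            cmd := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
                k8sVersion, phase, config)
            if err := runSSH(cmd); err != nil {
                return fmt.Errorf("kubeadm init phase %s: %w", phase, err)
            }
        }
        return nil
    }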
	I1205 20:32:06.666782  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:32:06.666885  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.167840  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.667069  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.701559  585025 api_server.go:72] duration metric: took 1.034769472s to wait for apiserver process to appear ...
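	The "waiting for apiserver process to appear" step is just a poll of `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until it exits 0 (the parallel 585602 run shows the same loop spinning much longer while its apiserver is still down). A minimal sketch of that poll, again assuming the hypothetical runSSH helper:

    package sketch

    import (
        "fmt"
        "time"
    )

    // waitForAPIServerProcess polls pgrep on the node until a kube-apiserver
    // process for this profile shows up, or the deadline passes.
    func waitForAPIServerProcess(runSSH func(cmd string) error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
                return nil // pgrep found a matching process
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }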
	I1205 20:32:07.701592  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:32:07.701612  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.640462  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.640498  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.640521  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.647093  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:10.647118  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:10.702286  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:10.711497  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:10.711528  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:07.219989  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:07.720289  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.220381  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:08.719947  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.219838  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:09.719666  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.219756  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:10.720312  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.220369  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.720004  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:11.202247  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.206625  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.206650  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:11.702760  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:11.718941  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:11.718974  585025 api_server.go:103] status: https://192.168.61.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:12.202567  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:32:12.207589  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:32:12.214275  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:12.214304  585025 api_server.go:131] duration metric: took 4.512704501s to wait for apiserver health ...
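	The /healthz progression above is typical of an apiserver coming back up: the first anonymous probes are rejected with 403 until RBAC bootstrap grants system:anonymous access to /healthz, then 500 while post-start hooks (CRD informers, bootstrap controllers, priority classes) are still failing, and finally 200 "ok" once every check passes. Below is a minimal polling sketch using net/http with an anonymous request and relaxed TLS, treating anything other than 200 "ok" as "keep waiting"; the endpoint and rough timings are taken from the log, the rest (including the TLS handling) is an illustrative assumption, not minikube's api_server.go:

    package sketch

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint anonymously until it
    // returns 200 "ok". 403 (anonymous not yet authorized) and 500 (post-start
    // hooks still failing) are both treated as "not ready yet".
    func waitForHealthz(endpoint string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption for brevity: skip certificate verification for the
                // anonymous probe. The real check may configure TLS differently.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(endpoint + "/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body) // on 500 the body lists the failing checks
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s/healthz not healthy within %s", endpoint, timeout)
    }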
	I1205 20:32:12.214314  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:32:12.214321  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:12.216193  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:08.229499  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:08.229544  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:12.545378  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:15.043628  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:12.217640  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:12.241907  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
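	The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI that the "kvm2 driver + crio runtime" combination falls back to. The log does not show the file's contents; the snippet below is a generic bridge plus host-local IPAM conflist of the kind CRI-O would load from that directory, embedded in a Go constant so it could be written the same way the log does (scp from memory). The field values are illustrative assumptions, not the exact bytes minikube shipped:

    package sketch

    // bridgeConflist is an illustrative CNI config list for a bridge network
    // with host-local IPAM; the real /etc/cni/net.d/1-k8s.conflist may differ.
    const bridgeConflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.244.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`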
	I1205 20:32:12.262114  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:12.275246  585025 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:12.275296  585025 system_pods.go:61] "coredns-7c65d6cfc9-j2hr2" [9ce413ab-c304-40dd-af68-80f15db0e2ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:12.275308  585025 system_pods.go:61] "etcd-no-preload-816185" [ddc20062-02d9-4f9d-a2fb-fa2c7d6aa1cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:12.275319  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [07ff76f2-b05e-4434-b8f9-448bc200507a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:12.275328  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [7c701058-791a-4097-a913-f6989a791067] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:12.275340  585025 system_pods.go:61] "kube-proxy-rjp4j" [340e9ccc-0290-4d3d-829c-44ad65410f3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:12.275348  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [c2f3b04c-9e3a-4060-a6d0-fb9eb2aa5e55] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:32:12.275359  585025 system_pods.go:61] "metrics-server-6867b74b74-vjwq2" [47ff24fe-0edb-4d06-b280-a0d965b25dae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:12.275367  585025 system_pods.go:61] "storage-provisioner" [bd385e87-56ea-417c-a4a8-b8a6e4f94114] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:12.275376  585025 system_pods.go:74] duration metric: took 13.23725ms to wait for pod list to return data ...
	I1205 20:32:12.275387  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:12.279719  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:12.279746  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:12.279755  585025 node_conditions.go:105] duration metric: took 4.364464ms to run NodePressure ...
	I1205 20:32:12.279774  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:12.562221  585025 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566599  585025 kubeadm.go:739] kubelet initialised
	I1205 20:32:12.566627  585025 kubeadm.go:740] duration metric: took 4.374855ms waiting for restarted kubelet to initialise ...
	I1205 20:32:12.566639  585025 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:12.571780  585025 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:14.579614  585025 pod_ready.go:103] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:12.220304  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:12.720348  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.219553  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.720078  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.219614  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:14.719625  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.220118  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:15.720577  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.220392  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:16.719538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:13.230519  585929 api_server.go:269] stopped: https://192.168.50.96:8444/healthz: Get "https://192.168.50.96:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:32:13.230567  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.061543  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.061583  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.061603  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.078424  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:32:16.078457  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:32:16.227852  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.553664  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.553705  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:16.728155  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:16.734800  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:16.734853  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.228013  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.233541  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 20:32:17.233577  585929 api_server.go:103] status: https://192.168.50.96:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 20:32:17.727878  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:32:17.736731  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:32:17.746474  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:32:17.746511  585929 api_server.go:131] duration metric: took 41.019245279s to wait for apiserver health ...
	I1205 20:32:17.746523  585929 cni.go:84] Creating CNI manager for ""
	I1205 20:32:17.746531  585929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:32:17.748464  585929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:32:17.750113  585929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:32:17.762750  585929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:32:17.786421  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:32:17.826859  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:32:17.826918  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:32:17.826934  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:32:17.826946  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:32:17.826959  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:32:17.826969  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:32:17.826980  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:32:17.826989  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:32:17.827000  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:32:17.827010  585929 system_pods.go:74] duration metric: took 40.565274ms to wait for pod list to return data ...
	I1205 20:32:17.827025  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:32:17.838000  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:32:17.838034  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:32:17.838050  585929 node_conditions.go:105] duration metric: took 11.010352ms to run NodePressure ...
	I1205 20:32:17.838075  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:32:18.215713  585929 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222162  585929 kubeadm.go:739] kubelet initialised
	I1205 20:32:18.222187  585929 kubeadm.go:740] duration metric: took 6.444578ms waiting for restarted kubelet to initialise ...
	I1205 20:32:18.222199  585929 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:18.226988  585929 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.235570  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235600  585929 pod_ready.go:82] duration metric: took 8.582972ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.235609  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.235617  585929 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.242596  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242623  585929 pod_ready.go:82] duration metric: took 6.99814ms for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.242634  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.242642  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.248351  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248373  585929 pod_ready.go:82] duration metric: took 5.725371ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.248383  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.248390  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.258151  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258174  585929 pod_ready.go:82] duration metric: took 9.778119ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.258183  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.258190  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:18.619579  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619623  585929 pod_ready.go:82] duration metric: took 361.426091ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:18.619638  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-proxy-5vdcq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:18.619649  585929 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.019623  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019655  585929 pod_ready.go:82] duration metric: took 399.997558ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.019669  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.019676  585929 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.420201  585929 pod_ready.go:98] node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420228  585929 pod_ready.go:82] duration metric: took 400.54576ms for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:32:19.420242  585929 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-942599" hosting pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:19.420251  585929 pod_ready.go:39] duration metric: took 1.198040831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
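	The pod_ready loop above waits on the Ready condition of each system-critical pod and deliberately skips pods whose node still reports Ready=False (the "skipping!" lines), since nothing can become Ready there until the kubelet finishes rejoining. A condensed client-go sketch of checking a single pod's Ready condition with a plain sleep loop; the function name and loop shape are illustrative, not minikube's pod_ready.go:

    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or the timeout expires.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }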
	I1205 20:32:19.420292  585929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:32:19.434385  585929 ops.go:34] apiserver oom_adj: -16
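	The oom_adj check confirms the restarted apiserver kept its protective OOM adjustment (-16). The log does it with a shell pipeline over SSH; an equivalent read from /proc in Go, assuming the apiserver's PID is already known, might look like:

    package sketch

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // apiserverOOMAdj reads /proc/<pid>/oom_adj for an already-discovered PID.
    func apiserverOOMAdj(pid int) (int, error) {
        raw, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(raw)))
    }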
	I1205 20:32:19.434420  585929 kubeadm.go:597] duration metric: took 45.406934122s to restartPrimaryControlPlane
	I1205 20:32:19.434434  585929 kubeadm.go:394] duration metric: took 45.464483994s to StartCluster
	I1205 20:32:19.434460  585929 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.434560  585929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:32:19.436299  585929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:32:19.436590  585929 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:32:19.436736  585929 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
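	The toEnable map above is the full addon switchboard for this profile; on this restart only default-storageclass, metrics-server and storage-provisioner are true, and each of those is then forced back into "state true" with the warnings that follow. A trivial, purely illustrative sketch of extracting the enabled set from such a map:

    package sketch

    import "sort"

    // enabledAddons returns the sorted names of addons switched on in a
    // toEnable-style map like the one logged above.
    func enabledAddons(toEnable map[string]bool) []string {
        var names []string
        for name, on := range toEnable {
            if on {
                names = append(names, name)
            }
        }
        sort.Strings(names)
        return names
    }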
	I1205 20:32:19.436837  585929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436858  585929 config.go:182] Loaded profile config "default-k8s-diff-port-942599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:32:19.436873  585929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.436883  585929 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:32:19.436923  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.436938  585929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.436974  585929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-942599"
	I1205 20:32:19.436922  585929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-942599"
	I1205 20:32:19.437024  585929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.437051  585929 addons.go:243] addon metrics-server should already be in state true
	I1205 20:32:19.437090  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.437365  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437407  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437452  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437480  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.437509  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.437514  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.438584  585929 out.go:177] * Verifying Kubernetes components...
	I1205 20:32:19.440376  585929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:32:19.453761  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I1205 20:32:19.453782  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I1205 20:32:19.453767  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I1205 20:32:19.454289  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454441  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454451  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.454851  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454871  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.454981  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.455005  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455021  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.455286  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455350  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455409  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.455461  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.455910  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455927  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.455958  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.455966  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.458587  585929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-942599"
	W1205 20:32:19.458605  585929 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:32:19.458627  585929 host.go:66] Checking if "default-k8s-diff-port-942599" exists ...
	I1205 20:32:19.458955  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.458995  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.472175  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I1205 20:32:19.472667  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.472927  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I1205 20:32:19.473215  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.473233  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.473401  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40929
	I1205 20:32:19.473570  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473608  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.473839  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.473933  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.474155  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474187  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474290  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.474313  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.474546  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474638  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.474711  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.475267  585929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:32:19.475320  585929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:32:19.476105  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.476447  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.478117  585929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:32:19.478117  585929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:32:17.545165  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.044285  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:17.079986  585025 pod_ready.go:93] pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:17.080014  585025 pod_ready.go:82] duration metric: took 4.508210865s for pod "coredns-7c65d6cfc9-j2hr2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:17.080025  585025 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.086070  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:20.587742  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:20.587775  585025 pod_ready.go:82] duration metric: took 3.507742173s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:20.587789  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:19.479638  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:32:19.479658  585929 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:32:19.479686  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.479719  585929 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.479737  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:32:19.479750  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.483208  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483350  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483773  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483790  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483873  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.483887  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.483936  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484123  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484166  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.484294  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484324  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.484438  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.484456  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.484571  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.533651  585929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I1205 20:32:19.534273  585929 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:32:19.534802  585929 main.go:141] libmachine: Using API Version  1
	I1205 20:32:19.534833  585929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:32:19.535282  585929 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:32:19.535535  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetState
	I1205 20:32:19.538221  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .DriverName
	I1205 20:32:19.538787  585929 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.538804  585929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:32:19.538825  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHHostname
	I1205 20:32:19.541876  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542318  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:dd:0f", ip: ""} in network mk-default-k8s-diff-port-942599: {Iface:virbr2 ExpiryTime:2024-12-05 21:31:19 +0000 UTC Type:0 Mac:52:54:00:f6:dd:0f Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:default-k8s-diff-port-942599 Clientid:01:52:54:00:f6:dd:0f}
	I1205 20:32:19.542354  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | domain default-k8s-diff-port-942599 has defined IP address 192.168.50.96 and MAC address 52:54:00:f6:dd:0f in network mk-default-k8s-diff-port-942599
	I1205 20:32:19.542556  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHPort
	I1205 20:32:19.542744  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHKeyPath
	I1205 20:32:19.542944  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .GetSSHUsername
	I1205 20:32:19.543129  585929 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/default-k8s-diff-port-942599/id_rsa Username:docker}
	I1205 20:32:19.630282  585929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:32:19.652591  585929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:19.719058  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:32:19.810931  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:32:19.812113  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:32:19.812136  585929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:32:19.875725  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:32:19.875761  585929 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:32:19.946353  585929 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:19.946390  585929 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:32:20.010445  585929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:32:20.231055  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231082  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231425  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231454  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231469  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.231478  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.231476  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.231764  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.231784  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.231783  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:20.247021  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:20.247051  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:20.247463  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:20.247490  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:20.247488  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.074948  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.263976727s)
	I1205 20:32:21.075015  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075029  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075397  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075438  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.075449  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.075457  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.075745  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.075766  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.075785  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134215  585929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.123724822s)
	I1205 20:32:21.134271  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134285  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134588  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134604  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134612  585929 main.go:141] libmachine: Making call to close driver server
	I1205 20:32:21.134615  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) DBG | Closing plugin on server side
	I1205 20:32:21.134620  585929 main.go:141] libmachine: (default-k8s-diff-port-942599) Calling .Close
	I1205 20:32:21.134878  585929 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:32:21.134891  585929 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:32:21.134904  585929 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-942599"
	I1205 20:32:21.136817  585929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:32:17.220437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:17.220539  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:17.272666  585602 cri.go:89] found id: ""
	I1205 20:32:17.272702  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.272716  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:17.272723  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:17.272797  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:17.314947  585602 cri.go:89] found id: ""
	I1205 20:32:17.314977  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.314989  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:17.314996  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:17.315061  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:17.354511  585602 cri.go:89] found id: ""
	I1205 20:32:17.354548  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.354561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:17.354571  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:17.354640  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:17.393711  585602 cri.go:89] found id: ""
	I1205 20:32:17.393745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.393759  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:17.393768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:17.393836  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:17.434493  585602 cri.go:89] found id: ""
	I1205 20:32:17.434526  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.434535  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:17.434541  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:17.434602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:17.476201  585602 cri.go:89] found id: ""
	I1205 20:32:17.476235  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.476245  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:17.476253  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:17.476341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:17.516709  585602 cri.go:89] found id: ""
	I1205 20:32:17.516745  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.516755  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:17.516762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:17.516818  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:17.557270  585602 cri.go:89] found id: ""
	I1205 20:32:17.557305  585602 logs.go:282] 0 containers: []
	W1205 20:32:17.557314  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:17.557324  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:17.557348  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:17.606494  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:17.606540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:17.681372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:17.681412  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:17.696778  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:17.696816  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:17.839655  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:17.839679  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:17.839717  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.423552  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:20.439794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:20.439875  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:20.482820  585602 cri.go:89] found id: ""
	I1205 20:32:20.482866  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.482880  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:20.482888  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:20.482958  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:20.523590  585602 cri.go:89] found id: ""
	I1205 20:32:20.523629  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.523641  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:20.523649  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:20.523727  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:20.601603  585602 cri.go:89] found id: ""
	I1205 20:32:20.601638  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.601648  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:20.601656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:20.601728  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:20.643927  585602 cri.go:89] found id: ""
	I1205 20:32:20.643959  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.643972  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:20.643981  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:20.644054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:20.690935  585602 cri.go:89] found id: ""
	I1205 20:32:20.690964  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.690975  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:20.690984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:20.691054  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:20.728367  585602 cri.go:89] found id: ""
	I1205 20:32:20.728400  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.728412  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:20.728420  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:20.728489  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:20.766529  585602 cri.go:89] found id: ""
	I1205 20:32:20.766562  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.766571  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:20.766578  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:20.766657  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:20.805641  585602 cri.go:89] found id: ""
	I1205 20:32:20.805680  585602 logs.go:282] 0 containers: []
	W1205 20:32:20.805690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:20.805701  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:20.805718  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:20.884460  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:20.884495  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:20.884514  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:20.998367  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:20.998429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:21.041210  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:21.041247  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:21.103519  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:21.103557  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:21.138175  585929 addons.go:510] duration metric: took 1.701453382s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:32:21.657269  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:22.541880  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:24.543481  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:22.595422  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.594392  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:23.594419  585025 pod_ready.go:82] duration metric: took 3.006622534s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:23.594430  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:25.601616  585025 pod_ready.go:103] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:23.619187  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:23.633782  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:23.633872  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:23.679994  585602 cri.go:89] found id: ""
	I1205 20:32:23.680023  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.680032  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:23.680038  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:23.680094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:23.718362  585602 cri.go:89] found id: ""
	I1205 20:32:23.718425  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.718439  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:23.718447  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:23.718520  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:23.758457  585602 cri.go:89] found id: ""
	I1205 20:32:23.758491  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.758500  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:23.758506  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:23.758558  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:23.794612  585602 cri.go:89] found id: ""
	I1205 20:32:23.794649  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.794662  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:23.794671  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:23.794738  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:23.832309  585602 cri.go:89] found id: ""
	I1205 20:32:23.832341  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.832354  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:23.832361  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:23.832421  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:23.868441  585602 cri.go:89] found id: ""
	I1205 20:32:23.868472  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.868484  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:23.868492  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:23.868573  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:23.902996  585602 cri.go:89] found id: ""
	I1205 20:32:23.903025  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.903036  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:23.903050  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:23.903115  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:23.939830  585602 cri.go:89] found id: ""
	I1205 20:32:23.939865  585602 logs.go:282] 0 containers: []
	W1205 20:32:23.939879  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:23.939892  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:23.939909  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:23.992310  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:23.992354  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:24.007378  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:24.007414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:24.077567  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:24.077594  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:24.077608  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:24.165120  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:24.165163  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:26.711674  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:26.726923  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:26.727008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:26.763519  585602 cri.go:89] found id: ""
	I1205 20:32:26.763554  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.763563  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:26.763570  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:26.763628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:26.802600  585602 cri.go:89] found id: ""
	I1205 20:32:26.802635  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.802644  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:26.802650  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:26.802705  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:26.839920  585602 cri.go:89] found id: ""
	I1205 20:32:26.839967  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.839981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:26.839989  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:26.840076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:24.157515  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:26.657197  585929 node_ready.go:53] node "default-k8s-diff-port-942599" has status "Ready":"False"
	I1205 20:32:27.656811  585929 node_ready.go:49] node "default-k8s-diff-port-942599" has status "Ready":"True"
	I1205 20:32:27.656842  585929 node_ready.go:38] duration metric: took 8.004215314s for node "default-k8s-diff-port-942599" to be "Ready" ...
	I1205 20:32:27.656854  585929 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:32:27.662792  585929 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668485  585929 pod_ready.go:93] pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.668510  585929 pod_ready.go:82] duration metric: took 5.690516ms for pod "coredns-7c65d6cfc9-5drgc" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.668521  585929 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:26.543536  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:28.544214  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:27.101514  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.101540  585025 pod_ready.go:82] duration metric: took 3.507102769s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.101551  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108084  585025 pod_ready.go:93] pod "kube-proxy-rjp4j" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.108116  585025 pod_ready.go:82] duration metric: took 6.557141ms for pod "kube-proxy-rjp4j" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.108131  585025 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112915  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:27.112942  585025 pod_ready.go:82] duration metric: took 4.801285ms for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:27.112955  585025 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.119094  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:26.876377  585602 cri.go:89] found id: ""
	I1205 20:32:26.876406  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.876416  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:26.876422  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:26.876491  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:26.913817  585602 cri.go:89] found id: ""
	I1205 20:32:26.913845  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.913854  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:26.913862  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:26.913936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:26.955739  585602 cri.go:89] found id: ""
	I1205 20:32:26.955775  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.955788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:26.955798  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:26.955863  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:26.996191  585602 cri.go:89] found id: ""
	I1205 20:32:26.996223  585602 logs.go:282] 0 containers: []
	W1205 20:32:26.996234  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:26.996242  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:26.996341  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:27.040905  585602 cri.go:89] found id: ""
	I1205 20:32:27.040935  585602 logs.go:282] 0 containers: []
	W1205 20:32:27.040947  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:27.040958  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:27.040973  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:27.098103  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:27.098140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:27.116538  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:27.116574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:27.204154  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:27.204187  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:27.204208  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:27.300380  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:27.300431  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:29.840944  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:29.855784  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:29.855869  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:29.893728  585602 cri.go:89] found id: ""
	I1205 20:32:29.893765  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.893777  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:29.893786  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:29.893867  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:29.930138  585602 cri.go:89] found id: ""
	I1205 20:32:29.930176  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.930186  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:29.930193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:29.930248  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:29.966340  585602 cri.go:89] found id: ""
	I1205 20:32:29.966371  585602 logs.go:282] 0 containers: []
	W1205 20:32:29.966380  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:29.966387  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:29.966463  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:30.003868  585602 cri.go:89] found id: ""
	I1205 20:32:30.003900  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.003920  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:30.003928  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:30.004001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:30.044332  585602 cri.go:89] found id: ""
	I1205 20:32:30.044363  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.044373  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:30.044380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:30.044445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:30.088044  585602 cri.go:89] found id: ""
	I1205 20:32:30.088085  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.088098  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:30.088106  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:30.088173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:30.124221  585602 cri.go:89] found id: ""
	I1205 20:32:30.124248  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.124258  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:30.124285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:30.124357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:30.162092  585602 cri.go:89] found id: ""
	I1205 20:32:30.162121  585602 logs.go:282] 0 containers: []
	W1205 20:32:30.162133  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:30.162146  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:30.162162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:30.218526  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:30.218567  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:30.232240  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:30.232292  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:30.308228  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:30.308260  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:30.308296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:30.389348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:30.389391  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:29.177093  585929 pod_ready.go:93] pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.177118  585929 pod_ready.go:82] duration metric: took 1.508590352s for pod "etcd-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.177129  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185839  585929 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.185869  585929 pod_ready.go:82] duration metric: took 8.733028ms for pod "kube-apiserver-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.185883  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191924  585929 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.191950  585929 pod_ready.go:82] duration metric: took 6.059525ms for pod "kube-controller-manager-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.191963  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256484  585929 pod_ready.go:93] pod "kube-proxy-5vdcq" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.256510  585929 pod_ready.go:82] duration metric: took 64.540117ms for pod "kube-proxy-5vdcq" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.256521  585929 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656933  585929 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace has status "Ready":"True"
	I1205 20:32:29.656961  585929 pod_ready.go:82] duration metric: took 400.432279ms for pod "kube-scheduler-default-k8s-diff-port-942599" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:29.656972  585929 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	I1205 20:32:31.664326  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.043630  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.044035  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.542861  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:31.120200  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:33.120303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:35.120532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:32.934497  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:32.949404  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:32.949488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:33.006117  585602 cri.go:89] found id: ""
	I1205 20:32:33.006148  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.006157  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:33.006163  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:33.006231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:33.064907  585602 cri.go:89] found id: ""
	I1205 20:32:33.064945  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.064958  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:33.064966  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:33.065031  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:33.101268  585602 cri.go:89] found id: ""
	I1205 20:32:33.101295  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.101304  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:33.101310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:33.101378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:33.141705  585602 cri.go:89] found id: ""
	I1205 20:32:33.141733  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.141743  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:33.141750  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:33.141810  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:33.180983  585602 cri.go:89] found id: ""
	I1205 20:32:33.181011  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.181020  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:33.181026  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:33.181086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:33.220742  585602 cri.go:89] found id: ""
	I1205 20:32:33.220779  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.220791  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:33.220799  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:33.220871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:33.255980  585602 cri.go:89] found id: ""
	I1205 20:32:33.256009  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.256017  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:33.256024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:33.256080  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:33.292978  585602 cri.go:89] found id: ""
	I1205 20:32:33.293005  585602 logs.go:282] 0 containers: []
	W1205 20:32:33.293013  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:33.293023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:33.293034  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:33.347167  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:33.347213  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:33.361367  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:33.361408  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:33.435871  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:33.435915  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:33.435932  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:33.518835  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:33.518880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:36.066359  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:36.080867  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:36.080947  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:36.117647  585602 cri.go:89] found id: ""
	I1205 20:32:36.117678  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.117689  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:36.117697  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:36.117763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:36.154376  585602 cri.go:89] found id: ""
	I1205 20:32:36.154412  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.154428  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:36.154436  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:36.154498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:36.193225  585602 cri.go:89] found id: ""
	I1205 20:32:36.193261  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.193274  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:36.193282  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:36.193347  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:36.230717  585602 cri.go:89] found id: ""
	I1205 20:32:36.230748  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.230758  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:36.230764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:36.230817  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:36.270186  585602 cri.go:89] found id: ""
	I1205 20:32:36.270238  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.270252  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:36.270262  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:36.270340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:36.306378  585602 cri.go:89] found id: ""
	I1205 20:32:36.306425  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.306438  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:36.306447  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:36.306531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:36.342256  585602 cri.go:89] found id: ""
	I1205 20:32:36.342289  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.342300  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:36.342306  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:36.342380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:36.380684  585602 cri.go:89] found id: ""
	I1205 20:32:36.380718  585602 logs.go:282] 0 containers: []
	W1205 20:32:36.380732  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:36.380745  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:36.380768  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:36.436066  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:36.436109  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:36.450255  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:36.450285  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:36.521857  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:36.521883  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:36.521897  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:36.608349  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:36.608395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:34.163870  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:36.164890  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:38.042889  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.543140  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:37.619863  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:40.120462  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:39.157366  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:39.171267  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:39.171357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:39.214459  585602 cri.go:89] found id: ""
	I1205 20:32:39.214490  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.214520  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:39.214528  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:39.214583  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:39.250312  585602 cri.go:89] found id: ""
	I1205 20:32:39.250352  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.250366  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:39.250375  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:39.250437  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:39.286891  585602 cri.go:89] found id: ""
	I1205 20:32:39.286932  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.286944  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:39.286952  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:39.287019  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:39.323923  585602 cri.go:89] found id: ""
	I1205 20:32:39.323958  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.323970  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:39.323979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:39.324053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:39.360280  585602 cri.go:89] found id: ""
	I1205 20:32:39.360322  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.360331  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:39.360337  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:39.360403  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:39.397599  585602 cri.go:89] found id: ""
	I1205 20:32:39.397637  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.397650  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:39.397659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:39.397731  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:39.435132  585602 cri.go:89] found id: ""
	I1205 20:32:39.435159  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.435168  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:39.435174  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:39.435241  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:39.470653  585602 cri.go:89] found id: ""
	I1205 20:32:39.470682  585602 logs.go:282] 0 containers: []
	W1205 20:32:39.470690  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:39.470700  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:39.470714  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:39.511382  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:39.511413  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:39.563955  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:39.563994  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:39.578015  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:39.578044  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:39.658505  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:39.658535  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:39.658550  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:38.665320  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:41.165054  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.545231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.042231  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.620687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:45.120915  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:42.248607  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:42.263605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:42.263688  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:42.305480  585602 cri.go:89] found id: ""
	I1205 20:32:42.305508  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.305519  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:42.305527  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:42.305595  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:42.339969  585602 cri.go:89] found id: ""
	I1205 20:32:42.340001  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.340010  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:42.340016  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:42.340090  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:42.381594  585602 cri.go:89] found id: ""
	I1205 20:32:42.381630  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.381643  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:42.381651  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:42.381771  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:42.435039  585602 cri.go:89] found id: ""
	I1205 20:32:42.435072  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.435085  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:42.435093  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:42.435162  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:42.470567  585602 cri.go:89] found id: ""
	I1205 20:32:42.470595  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.470604  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:42.470610  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:42.470674  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:42.510695  585602 cri.go:89] found id: ""
	I1205 20:32:42.510723  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.510731  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:42.510738  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:42.510793  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:42.547687  585602 cri.go:89] found id: ""
	I1205 20:32:42.547711  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.547718  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:42.547735  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:42.547784  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:42.587160  585602 cri.go:89] found id: ""
	I1205 20:32:42.587191  585602 logs.go:282] 0 containers: []
	W1205 20:32:42.587199  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:42.587211  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:42.587225  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:42.669543  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:42.669587  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:42.717795  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:42.717833  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:42.772644  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:42.772696  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:42.788443  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:42.788480  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:42.861560  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.362758  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:45.377178  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:45.377266  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:45.413055  585602 cri.go:89] found id: ""
	I1205 20:32:45.413088  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.413102  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:45.413111  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:45.413176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:45.453769  585602 cri.go:89] found id: ""
	I1205 20:32:45.453799  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.453808  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:45.453813  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:45.453879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:45.499481  585602 cri.go:89] found id: ""
	I1205 20:32:45.499511  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.499522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:45.499531  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:45.499598  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:45.537603  585602 cri.go:89] found id: ""
	I1205 20:32:45.537638  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.537647  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:45.537653  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:45.537707  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:45.572430  585602 cri.go:89] found id: ""
	I1205 20:32:45.572463  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.572471  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:45.572479  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:45.572556  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:45.610349  585602 cri.go:89] found id: ""
	I1205 20:32:45.610387  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.610398  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:45.610406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:45.610476  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:45.649983  585602 cri.go:89] found id: ""
	I1205 20:32:45.650018  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.650031  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:45.650038  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:45.650113  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:45.689068  585602 cri.go:89] found id: ""
	I1205 20:32:45.689099  585602 logs.go:282] 0 containers: []
	W1205 20:32:45.689107  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:45.689118  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:45.689131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:45.743715  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:45.743758  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:45.759803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:45.759834  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:45.835107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:45.835133  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:45.835146  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:45.914590  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:45.914632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:43.665616  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:46.164064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.045269  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.544519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:47.619099  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:49.627948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:48.456633  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:48.475011  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:48.475086  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:48.512878  585602 cri.go:89] found id: ""
	I1205 20:32:48.512913  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.512925  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:48.512933  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:48.513002  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:48.551708  585602 cri.go:89] found id: ""
	I1205 20:32:48.551737  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.551744  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:48.551751  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:48.551805  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:48.590765  585602 cri.go:89] found id: ""
	I1205 20:32:48.590791  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.590800  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:48.590806  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:48.590859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:48.629447  585602 cri.go:89] found id: ""
	I1205 20:32:48.629473  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.629481  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:48.629487  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:48.629540  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:48.667299  585602 cri.go:89] found id: ""
	I1205 20:32:48.667329  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.667339  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:48.667347  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:48.667414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:48.703771  585602 cri.go:89] found id: ""
	I1205 20:32:48.703816  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.703830  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:48.703841  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:48.703911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:48.747064  585602 cri.go:89] found id: ""
	I1205 20:32:48.747098  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.747111  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:48.747118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:48.747186  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.786608  585602 cri.go:89] found id: ""
	I1205 20:32:48.786649  585602 logs.go:282] 0 containers: []
	W1205 20:32:48.786663  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:48.786684  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:48.786700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:48.860834  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:48.860866  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:48.860881  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:48.944029  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:48.944082  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:48.982249  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:48.982284  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:49.036460  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:49.036509  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.556456  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:51.571498  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:51.571590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:51.616890  585602 cri.go:89] found id: ""
	I1205 20:32:51.616924  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.616934  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:51.616942  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:51.617008  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:51.660397  585602 cri.go:89] found id: ""
	I1205 20:32:51.660433  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.660445  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:51.660453  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:51.660543  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:51.698943  585602 cri.go:89] found id: ""
	I1205 20:32:51.698973  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.698981  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:51.698988  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:51.699041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:51.737254  585602 cri.go:89] found id: ""
	I1205 20:32:51.737288  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.737297  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:51.737310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:51.737366  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:51.775560  585602 cri.go:89] found id: ""
	I1205 20:32:51.775592  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.775600  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:51.775606  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:51.775681  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:51.814314  585602 cri.go:89] found id: ""
	I1205 20:32:51.814370  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.814383  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:51.814393  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:51.814464  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:51.849873  585602 cri.go:89] found id: ""
	I1205 20:32:51.849913  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.849935  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:51.849944  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:51.850018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:48.164562  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:50.664498  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.044224  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.542721  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:52.118857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:54.120231  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:51.891360  585602 cri.go:89] found id: ""
	I1205 20:32:51.891388  585602 logs.go:282] 0 containers: []
	W1205 20:32:51.891400  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:51.891412  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:51.891429  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:51.943812  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:51.943854  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:51.959119  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:51.959152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:52.036014  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:52.036040  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:52.036059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:52.114080  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:52.114122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:54.657243  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:54.672319  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:54.672407  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:54.708446  585602 cri.go:89] found id: ""
	I1205 20:32:54.708475  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.708484  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:54.708491  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:54.708569  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:54.747309  585602 cri.go:89] found id: ""
	I1205 20:32:54.747347  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.747359  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:54.747370  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:54.747451  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:54.790742  585602 cri.go:89] found id: ""
	I1205 20:32:54.790772  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.790781  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:54.790787  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:54.790853  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:54.828857  585602 cri.go:89] found id: ""
	I1205 20:32:54.828885  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.828894  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:54.828902  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:54.828964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:54.867691  585602 cri.go:89] found id: ""
	I1205 20:32:54.867729  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.867740  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:54.867747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:54.867819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:54.907216  585602 cri.go:89] found id: ""
	I1205 20:32:54.907242  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.907249  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:54.907256  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:54.907308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:54.945800  585602 cri.go:89] found id: ""
	I1205 20:32:54.945827  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.945837  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:54.945844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:54.945895  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:54.993176  585602 cri.go:89] found id: ""
	I1205 20:32:54.993216  585602 logs.go:282] 0 containers: []
	W1205 20:32:54.993228  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:54.993242  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:54.993258  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:55.045797  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:55.045835  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:55.060103  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:55.060136  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:55.129440  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:55.129467  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:55.129485  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:55.214949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:55.214999  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:53.164619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:55.663605  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.543148  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.543374  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.543687  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:56.620220  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:58.620759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.626643  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:32:57.755086  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:32:57.769533  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:32:57.769622  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:32:57.807812  585602 cri.go:89] found id: ""
	I1205 20:32:57.807847  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.807858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:32:57.807869  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:32:57.807941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:32:57.846179  585602 cri.go:89] found id: ""
	I1205 20:32:57.846209  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.846223  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:32:57.846232  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:32:57.846305  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:32:57.881438  585602 cri.go:89] found id: ""
	I1205 20:32:57.881473  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.881482  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:32:57.881496  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:32:57.881553  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:32:57.918242  585602 cri.go:89] found id: ""
	I1205 20:32:57.918283  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.918294  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:32:57.918302  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:32:57.918378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:32:57.962825  585602 cri.go:89] found id: ""
	I1205 20:32:57.962863  585602 logs.go:282] 0 containers: []
	W1205 20:32:57.962873  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:32:57.962879  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:32:57.962955  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:32:58.004655  585602 cri.go:89] found id: ""
	I1205 20:32:58.004699  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.004711  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:32:58.004731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:32:58.004802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:32:58.043701  585602 cri.go:89] found id: ""
	I1205 20:32:58.043730  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.043738  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:32:58.043744  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:32:58.043802  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:32:58.081400  585602 cri.go:89] found id: ""
	I1205 20:32:58.081437  585602 logs.go:282] 0 containers: []
	W1205 20:32:58.081450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:32:58.081463  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:32:58.081486  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:32:58.135531  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:32:58.135573  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:32:58.149962  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:32:58.149998  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:32:58.227810  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:32:58.227834  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:32:58.227849  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:32:58.308173  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:32:58.308219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:00.848019  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:00.863423  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:00.863496  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:00.902526  585602 cri.go:89] found id: ""
	I1205 20:33:00.902553  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.902561  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:00.902567  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:00.902621  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:00.939891  585602 cri.go:89] found id: ""
	I1205 20:33:00.939932  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.939942  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:00.939948  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:00.940022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:00.981645  585602 cri.go:89] found id: ""
	I1205 20:33:00.981676  585602 logs.go:282] 0 containers: []
	W1205 20:33:00.981684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:00.981691  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:00.981745  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:01.027753  585602 cri.go:89] found id: ""
	I1205 20:33:01.027780  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.027789  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:01.027795  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:01.027877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:01.064529  585602 cri.go:89] found id: ""
	I1205 20:33:01.064559  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.064567  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:01.064574  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:01.064628  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:01.102239  585602 cri.go:89] found id: ""
	I1205 20:33:01.102272  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.102281  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:01.102287  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:01.102357  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:01.139723  585602 cri.go:89] found id: ""
	I1205 20:33:01.139760  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.139770  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:01.139778  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:01.139845  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:01.176172  585602 cri.go:89] found id: ""
	I1205 20:33:01.176198  585602 logs.go:282] 0 containers: []
	W1205 20:33:01.176207  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:01.176216  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:01.176231  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:01.230085  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:01.230133  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:01.245574  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:01.245617  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:01.340483  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:01.340520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:01.340537  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:01.416925  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:01.416972  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:32:58.164852  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:00.664376  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:02.677134  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.042415  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.543101  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.119783  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:05.120647  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:03.958855  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:03.974024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:03.974096  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:04.021407  585602 cri.go:89] found id: ""
	I1205 20:33:04.021442  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.021451  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:04.021458  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:04.021523  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:04.063385  585602 cri.go:89] found id: ""
	I1205 20:33:04.063414  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.063423  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:04.063430  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:04.063488  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:04.103693  585602 cri.go:89] found id: ""
	I1205 20:33:04.103735  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.103747  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:04.103756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:04.103815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:04.143041  585602 cri.go:89] found id: ""
	I1205 20:33:04.143072  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.143100  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:04.143109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:04.143179  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:04.180668  585602 cri.go:89] found id: ""
	I1205 20:33:04.180702  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.180712  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:04.180718  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:04.180778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:04.221848  585602 cri.go:89] found id: ""
	I1205 20:33:04.221885  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.221894  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:04.221901  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:04.222018  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:04.263976  585602 cri.go:89] found id: ""
	I1205 20:33:04.264014  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.264024  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:04.264030  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:04.264097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:04.298698  585602 cri.go:89] found id: ""
	I1205 20:33:04.298726  585602 logs.go:282] 0 containers: []
	W1205 20:33:04.298737  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:04.298751  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:04.298767  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:04.347604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:04.347659  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:04.361325  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:04.361361  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:04.437679  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:04.437704  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:04.437720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:04.520043  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:04.520103  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:05.163317  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.165936  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:08.043365  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:10.544442  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.122134  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:09.620228  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:07.070687  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:07.085290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:07.085367  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:07.126233  585602 cri.go:89] found id: ""
	I1205 20:33:07.126265  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.126276  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:07.126285  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:07.126346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:07.163004  585602 cri.go:89] found id: ""
	I1205 20:33:07.163040  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.163053  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:07.163061  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:07.163126  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:07.201372  585602 cri.go:89] found id: ""
	I1205 20:33:07.201412  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.201425  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:07.201435  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:07.201509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:07.237762  585602 cri.go:89] found id: ""
	I1205 20:33:07.237795  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.237807  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:07.237815  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:07.237885  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:07.273940  585602 cri.go:89] found id: ""
	I1205 20:33:07.273976  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.273985  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:07.273995  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:07.274057  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:07.311028  585602 cri.go:89] found id: ""
	I1205 20:33:07.311061  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.311070  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:07.311076  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:07.311131  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:07.347386  585602 cri.go:89] found id: ""
	I1205 20:33:07.347422  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.347433  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:07.347441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:07.347503  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:07.386412  585602 cri.go:89] found id: ""
	I1205 20:33:07.386446  585602 logs.go:282] 0 containers: []
	W1205 20:33:07.386458  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:07.386471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:07.386489  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:07.430250  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:07.430280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:07.483936  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:07.483982  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:07.498201  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:07.498236  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:07.576741  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:07.576767  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:07.576780  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.164792  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:10.178516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:10.178596  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:10.215658  585602 cri.go:89] found id: ""
	I1205 20:33:10.215692  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.215702  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:10.215711  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:10.215779  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:10.251632  585602 cri.go:89] found id: ""
	I1205 20:33:10.251671  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.251683  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:10.251691  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:10.251763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:10.295403  585602 cri.go:89] found id: ""
	I1205 20:33:10.295435  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.295453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:10.295460  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:10.295513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:10.329747  585602 cri.go:89] found id: ""
	I1205 20:33:10.329778  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.329787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:10.329793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:10.329871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:10.369975  585602 cri.go:89] found id: ""
	I1205 20:33:10.370016  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.370028  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:10.370036  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:10.370104  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:10.408146  585602 cri.go:89] found id: ""
	I1205 20:33:10.408183  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.408196  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:10.408204  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:10.408288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:10.443803  585602 cri.go:89] found id: ""
	I1205 20:33:10.443839  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.443850  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:10.443858  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:10.443932  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:10.481784  585602 cri.go:89] found id: ""
	I1205 20:33:10.481826  585602 logs.go:282] 0 containers: []
	W1205 20:33:10.481840  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:10.481854  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:10.481872  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:10.531449  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:10.531498  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:10.549258  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:10.549288  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:10.620162  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:10.620189  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:10.620206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:10.704656  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:10.704706  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:09.663940  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.163534  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:13.043720  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:15.542736  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:12.118781  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:14.619996  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
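The interleaved pod_ready.go lines come from the other StartStop test processes (pids 585929, 585113, 585025), each polling its metrics-server pod for the Ready condition and logging "False" while it waits. A hedged one-liner that performs the same check by hand (the k8s-app=metrics-server label selector is an assumption about the addon's manifest, not taken from this log):

    # prints "True" once the metrics-server pod reports Ready
    kubectl -n kube-system get pod -l k8s-app=metrics-server \
      -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'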
	I1205 20:33:13.251518  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:13.264731  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:13.264815  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:13.297816  585602 cri.go:89] found id: ""
	I1205 20:33:13.297846  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.297855  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:13.297861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:13.297918  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:13.330696  585602 cri.go:89] found id: ""
	I1205 20:33:13.330724  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.330732  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:13.330738  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:13.330789  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:13.366257  585602 cri.go:89] found id: ""
	I1205 20:33:13.366304  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.366315  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:13.366321  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:13.366385  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:13.403994  585602 cri.go:89] found id: ""
	I1205 20:33:13.404030  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.404042  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:13.404051  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:13.404121  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:13.450160  585602 cri.go:89] found id: ""
	I1205 20:33:13.450189  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.450198  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:13.450205  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:13.450262  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:13.502593  585602 cri.go:89] found id: ""
	I1205 20:33:13.502629  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.502640  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:13.502650  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:13.502720  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:13.548051  585602 cri.go:89] found id: ""
	I1205 20:33:13.548084  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.548095  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:13.548103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:13.548166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:13.593913  585602 cri.go:89] found id: ""
	I1205 20:33:13.593947  585602 logs.go:282] 0 containers: []
	W1205 20:33:13.593960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:13.593975  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:13.593997  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:13.674597  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:13.674628  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:13.674647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:13.760747  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:13.760796  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:13.804351  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:13.804383  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:13.856896  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:13.856958  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.372754  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:16.387165  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:16.387242  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:16.426612  585602 cri.go:89] found id: ""
	I1205 20:33:16.426655  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.426668  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:16.426676  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:16.426734  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:16.461936  585602 cri.go:89] found id: ""
	I1205 20:33:16.461974  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.461988  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:16.461997  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:16.462060  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:16.498010  585602 cri.go:89] found id: ""
	I1205 20:33:16.498044  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.498062  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:16.498069  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:16.498133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:16.533825  585602 cri.go:89] found id: ""
	I1205 20:33:16.533854  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.533863  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:16.533869  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:16.533941  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:16.570834  585602 cri.go:89] found id: ""
	I1205 20:33:16.570875  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.570887  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:16.570896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:16.570968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:16.605988  585602 cri.go:89] found id: ""
	I1205 20:33:16.606026  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.606038  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:16.606047  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:16.606140  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:16.645148  585602 cri.go:89] found id: ""
	I1205 20:33:16.645178  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.645188  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:16.645195  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:16.645261  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:16.682449  585602 cri.go:89] found id: ""
	I1205 20:33:16.682479  585602 logs.go:282] 0 containers: []
	W1205 20:33:16.682491  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:16.682502  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:16.682519  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:16.696944  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:16.696980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:16.777034  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:16.777064  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:16.777078  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:14.164550  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.664527  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:17.543278  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:19.543404  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.621517  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:18.626303  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:16.854812  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:16.854880  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:16.905101  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:16.905131  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.463427  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:19.477135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:19.477233  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:19.529213  585602 cri.go:89] found id: ""
	I1205 20:33:19.529248  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.529264  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:19.529274  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:19.529359  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:19.575419  585602 cri.go:89] found id: ""
	I1205 20:33:19.575453  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.575465  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:19.575474  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:19.575546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:19.616657  585602 cri.go:89] found id: ""
	I1205 20:33:19.616691  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.616704  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:19.616713  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:19.616787  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:19.653142  585602 cri.go:89] found id: ""
	I1205 20:33:19.653177  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.653189  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:19.653198  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:19.653267  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:19.690504  585602 cri.go:89] found id: ""
	I1205 20:33:19.690544  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.690555  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:19.690563  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:19.690635  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:19.730202  585602 cri.go:89] found id: ""
	I1205 20:33:19.730229  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.730237  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:19.730245  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:19.730302  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:19.767212  585602 cri.go:89] found id: ""
	I1205 20:33:19.767243  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.767255  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:19.767264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:19.767336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:19.803089  585602 cri.go:89] found id: ""
	I1205 20:33:19.803125  585602 logs.go:282] 0 containers: []
	W1205 20:33:19.803137  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:19.803163  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:19.803180  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:19.884542  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:19.884589  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:19.925257  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:19.925303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:19.980457  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:19.980510  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:19.997026  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:19.997057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:20.075062  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:18.664915  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.163064  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:22.042272  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:24.043822  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:21.120054  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:23.120944  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.618857  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:22.575469  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:22.588686  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:22.588768  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:22.622824  585602 cri.go:89] found id: ""
	I1205 20:33:22.622860  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.622868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:22.622874  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:22.622931  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:22.659964  585602 cri.go:89] found id: ""
	I1205 20:33:22.660059  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.660074  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:22.660085  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:22.660153  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:22.695289  585602 cri.go:89] found id: ""
	I1205 20:33:22.695325  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.695337  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:22.695345  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:22.695417  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:22.734766  585602 cri.go:89] found id: ""
	I1205 20:33:22.734801  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.734813  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:22.734821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:22.734896  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:22.773778  585602 cri.go:89] found id: ""
	I1205 20:33:22.773806  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.773818  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:22.773826  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:22.773899  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:22.811468  585602 cri.go:89] found id: ""
	I1205 20:33:22.811503  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.811514  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:22.811521  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:22.811591  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:22.852153  585602 cri.go:89] found id: ""
	I1205 20:33:22.852210  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.852221  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:22.852227  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:22.852318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:22.888091  585602 cri.go:89] found id: ""
	I1205 20:33:22.888120  585602 logs.go:282] 0 containers: []
	W1205 20:33:22.888129  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:22.888139  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:22.888155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:22.943210  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:22.943252  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:22.958356  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:22.958393  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:23.026732  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:23.026770  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:23.026788  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:23.106356  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:23.106395  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:25.650832  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:25.665392  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:25.665475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:25.701109  585602 cri.go:89] found id: ""
	I1205 20:33:25.701146  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.701155  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:25.701162  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:25.701231  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:25.738075  585602 cri.go:89] found id: ""
	I1205 20:33:25.738108  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.738117  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:25.738123  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:25.738176  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:25.775031  585602 cri.go:89] found id: ""
	I1205 20:33:25.775078  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.775090  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:25.775100  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:25.775173  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:25.811343  585602 cri.go:89] found id: ""
	I1205 20:33:25.811376  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.811386  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:25.811395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:25.811471  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:25.846635  585602 cri.go:89] found id: ""
	I1205 20:33:25.846674  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.846684  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:25.846692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:25.846766  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:25.881103  585602 cri.go:89] found id: ""
	I1205 20:33:25.881136  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.881145  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:25.881151  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:25.881224  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:25.917809  585602 cri.go:89] found id: ""
	I1205 20:33:25.917844  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.917855  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:25.917864  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:25.917936  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:25.955219  585602 cri.go:89] found id: ""
	I1205 20:33:25.955245  585602 logs.go:282] 0 containers: []
	W1205 20:33:25.955254  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:25.955264  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:25.955276  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:26.007016  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:26.007059  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:26.021554  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:26.021601  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:26.099290  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:26.099321  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:26.099334  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:26.182955  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:26.182993  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:23.164876  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:25.665151  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:26.542519  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:28.542856  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.542941  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:27.621687  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.119140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:28.725201  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:28.739515  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:28.739602  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.778187  585602 cri.go:89] found id: ""
	I1205 20:33:28.778230  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.778242  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:28.778249  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:28.778315  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:28.815788  585602 cri.go:89] found id: ""
	I1205 20:33:28.815826  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.815838  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:28.815845  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:28.815912  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:28.852222  585602 cri.go:89] found id: ""
	I1205 20:33:28.852251  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.852261  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:28.852289  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:28.852362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:28.889742  585602 cri.go:89] found id: ""
	I1205 20:33:28.889776  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.889787  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:28.889794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:28.889859  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:28.926872  585602 cri.go:89] found id: ""
	I1205 20:33:28.926903  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.926912  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:28.926919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:28.926972  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:28.963380  585602 cri.go:89] found id: ""
	I1205 20:33:28.963418  585602 logs.go:282] 0 containers: []
	W1205 20:33:28.963432  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:28.963441  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:28.963509  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:29.000711  585602 cri.go:89] found id: ""
	I1205 20:33:29.000746  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.000764  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:29.000772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:29.000848  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:29.035934  585602 cri.go:89] found id: ""
	I1205 20:33:29.035963  585602 logs.go:282] 0 containers: []
	W1205 20:33:29.035974  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:29.035987  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:29.036003  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:29.091336  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:29.091382  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:29.105784  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:29.105814  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:29.182038  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:29.182078  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:29.182095  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:29.261107  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:29.261153  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:31.802911  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:31.817285  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:31.817369  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:28.164470  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:30.664154  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:33.043654  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.044730  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:32.120759  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:34.619618  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:31.854865  585602 cri.go:89] found id: ""
	I1205 20:33:31.854900  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.854914  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:31.854922  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:31.854995  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:31.893928  585602 cri.go:89] found id: ""
	I1205 20:33:31.893964  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.893977  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:31.893984  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:31.894053  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:31.929490  585602 cri.go:89] found id: ""
	I1205 20:33:31.929527  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.929540  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:31.929548  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:31.929637  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:31.964185  585602 cri.go:89] found id: ""
	I1205 20:33:31.964211  585602 logs.go:282] 0 containers: []
	W1205 20:33:31.964219  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:31.964225  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:31.964291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:32.002708  585602 cri.go:89] found id: ""
	I1205 20:33:32.002748  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.002760  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:32.002768  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:32.002847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:32.040619  585602 cri.go:89] found id: ""
	I1205 20:33:32.040712  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.040740  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:32.040758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:32.040839  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:32.079352  585602 cri.go:89] found id: ""
	I1205 20:33:32.079390  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.079404  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:32.079412  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:32.079484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:32.117560  585602 cri.go:89] found id: ""
	I1205 20:33:32.117596  585602 logs.go:282] 0 containers: []
	W1205 20:33:32.117608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:32.117629  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:32.117653  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:32.172639  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:32.172686  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:32.187687  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:32.187727  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:32.265000  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:32.265034  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:32.265051  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:32.348128  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:32.348176  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:34.890144  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:34.903953  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:34.904032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:34.939343  585602 cri.go:89] found id: ""
	I1205 20:33:34.939374  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.939383  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:34.939389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:34.939444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:34.978225  585602 cri.go:89] found id: ""
	I1205 20:33:34.978266  585602 logs.go:282] 0 containers: []
	W1205 20:33:34.978278  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:34.978286  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:34.978363  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:35.015918  585602 cri.go:89] found id: ""
	I1205 20:33:35.015950  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.015960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:35.015966  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:35.016032  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:35.053222  585602 cri.go:89] found id: ""
	I1205 20:33:35.053249  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.053257  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:35.053264  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:35.053320  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:35.088369  585602 cri.go:89] found id: ""
	I1205 20:33:35.088401  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.088412  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:35.088421  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:35.088498  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:35.135290  585602 cri.go:89] found id: ""
	I1205 20:33:35.135327  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.135338  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:35.135346  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:35.135412  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:35.174959  585602 cri.go:89] found id: ""
	I1205 20:33:35.174996  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.175008  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:35.175017  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:35.175097  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:35.215101  585602 cri.go:89] found id: ""
	I1205 20:33:35.215134  585602 logs.go:282] 0 containers: []
	W1205 20:33:35.215143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:35.215152  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:35.215167  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:35.269372  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:35.269414  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:35.285745  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:35.285776  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:35.364774  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:35.364807  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:35.364824  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:35.445932  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:35.445980  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:33.163790  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:35.163966  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.164819  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.047128  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.543051  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:36.620450  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:39.120055  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:37.996837  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:38.010545  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:38.010612  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:38.048292  585602 cri.go:89] found id: ""
	I1205 20:33:38.048334  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.048350  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:38.048360  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:38.048429  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:38.086877  585602 cri.go:89] found id: ""
	I1205 20:33:38.086911  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.086921  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:38.086927  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:38.087001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:38.122968  585602 cri.go:89] found id: ""
	I1205 20:33:38.122999  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.123010  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:38.123018  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:38.123082  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:38.164901  585602 cri.go:89] found id: ""
	I1205 20:33:38.164940  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.164949  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:38.164955  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:38.165006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:38.200697  585602 cri.go:89] found id: ""
	I1205 20:33:38.200725  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.200734  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:38.200740  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:38.200803  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:38.240306  585602 cri.go:89] found id: ""
	I1205 20:33:38.240338  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.240347  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:38.240354  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:38.240424  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:38.275788  585602 cri.go:89] found id: ""
	I1205 20:33:38.275823  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.275835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:38.275844  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:38.275917  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:38.311431  585602 cri.go:89] found id: ""
	I1205 20:33:38.311468  585602 logs.go:282] 0 containers: []
	W1205 20:33:38.311480  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:38.311493  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:38.311507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:38.361472  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:38.361515  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:38.375970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:38.376004  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:38.450913  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:38.450941  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:38.450961  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:38.527620  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:38.527666  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:41.072438  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:41.086085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:41.086168  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:41.123822  585602 cri.go:89] found id: ""
	I1205 20:33:41.123852  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.123861  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:41.123868  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:41.123919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:41.160343  585602 cri.go:89] found id: ""
	I1205 20:33:41.160371  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.160380  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:41.160389  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:41.160457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:41.198212  585602 cri.go:89] found id: ""
	I1205 20:33:41.198240  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.198249  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:41.198255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:41.198309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:41.233793  585602 cri.go:89] found id: ""
	I1205 20:33:41.233824  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.233832  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:41.233838  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:41.233890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:41.269397  585602 cri.go:89] found id: ""
	I1205 20:33:41.269435  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.269447  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:41.269457  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:41.269529  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:41.303079  585602 cri.go:89] found id: ""
	I1205 20:33:41.303116  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.303128  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:41.303136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:41.303196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:41.337784  585602 cri.go:89] found id: ""
	I1205 20:33:41.337817  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.337826  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:41.337832  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:41.337901  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:41.371410  585602 cri.go:89] found id: ""
	I1205 20:33:41.371438  585602 logs.go:282] 0 containers: []
	W1205 20:33:41.371446  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:41.371456  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:41.371467  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:41.422768  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:41.422807  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:41.437427  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:41.437461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:41.510875  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:41.510898  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:41.510915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:41.590783  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:41.590826  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:39.667344  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.172287  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:42.043022  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.543222  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:41.120670  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:43.622132  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:45.623483  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:44.136390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:44.149935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:44.150006  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:44.187807  585602 cri.go:89] found id: ""
	I1205 20:33:44.187846  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.187858  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:44.187866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:44.187933  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:44.224937  585602 cri.go:89] found id: ""
	I1205 20:33:44.224965  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.224973  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:44.224978  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:44.225040  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:44.260230  585602 cri.go:89] found id: ""
	I1205 20:33:44.260274  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.260287  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:44.260297  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:44.260439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:44.296410  585602 cri.go:89] found id: ""
	I1205 20:33:44.296439  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.296449  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:44.296455  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:44.296507  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:44.332574  585602 cri.go:89] found id: ""
	I1205 20:33:44.332623  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.332635  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:44.332642  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:44.332709  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:44.368925  585602 cri.go:89] found id: ""
	I1205 20:33:44.368973  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.368985  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:44.368994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:44.369068  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:44.410041  585602 cri.go:89] found id: ""
	I1205 20:33:44.410075  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.410088  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:44.410095  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:44.410165  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:44.454254  585602 cri.go:89] found id: ""
	I1205 20:33:44.454295  585602 logs.go:282] 0 containers: []
	W1205 20:33:44.454316  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:44.454330  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:44.454346  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:44.507604  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:44.507669  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:44.525172  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:44.525219  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:44.599417  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:44.599446  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:44.599465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:44.681624  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:44.681685  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:44.664942  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.163452  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.043225  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:49.044675  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:48.120302  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:50.120568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:47.230092  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:47.243979  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:47.244076  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:47.280346  585602 cri.go:89] found id: ""
	I1205 20:33:47.280376  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.280385  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:47.280392  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:47.280448  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:47.316454  585602 cri.go:89] found id: ""
	I1205 20:33:47.316479  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.316487  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:47.316493  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:47.316546  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:47.353339  585602 cri.go:89] found id: ""
	I1205 20:33:47.353374  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.353386  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:47.353395  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:47.353466  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:47.388256  585602 cri.go:89] found id: ""
	I1205 20:33:47.388319  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.388330  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:47.388339  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:47.388408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:47.424907  585602 cri.go:89] found id: ""
	I1205 20:33:47.424942  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.424953  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:47.424961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:47.425035  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:47.461386  585602 cri.go:89] found id: ""
	I1205 20:33:47.461416  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.461425  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:47.461431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:47.461485  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:47.501092  585602 cri.go:89] found id: ""
	I1205 20:33:47.501121  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.501130  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:47.501136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:47.501189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:47.559478  585602 cri.go:89] found id: ""
	I1205 20:33:47.559507  585602 logs.go:282] 0 containers: []
	W1205 20:33:47.559520  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:47.559533  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:47.559551  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:47.609761  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:47.609800  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:47.626579  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:47.626606  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:47.713490  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:47.713520  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:47.713540  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:47.795346  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:47.795398  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.339441  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:50.353134  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:50.353216  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:50.393950  585602 cri.go:89] found id: ""
	I1205 20:33:50.393979  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.393990  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:50.394007  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:50.394074  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:50.431166  585602 cri.go:89] found id: ""
	I1205 20:33:50.431201  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.431212  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:50.431221  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:50.431291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:50.472641  585602 cri.go:89] found id: ""
	I1205 20:33:50.472674  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.472684  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:50.472692  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:50.472763  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:50.512111  585602 cri.go:89] found id: ""
	I1205 20:33:50.512152  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.512165  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:50.512173  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:50.512247  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:50.554500  585602 cri.go:89] found id: ""
	I1205 20:33:50.554536  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.554549  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:50.554558  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:50.554625  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:50.590724  585602 cri.go:89] found id: ""
	I1205 20:33:50.590755  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.590764  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:50.590771  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:50.590837  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:50.628640  585602 cri.go:89] found id: ""
	I1205 20:33:50.628666  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.628675  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:50.628681  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:50.628732  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:50.670009  585602 cri.go:89] found id: ""
	I1205 20:33:50.670039  585602 logs.go:282] 0 containers: []
	W1205 20:33:50.670047  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:50.670063  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:50.670075  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:50.684236  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:50.684290  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:50.757761  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:50.757790  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:50.757813  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:50.839665  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:50.839720  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:50.881087  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:50.881122  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:49.164986  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.665655  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:51.543286  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.543689  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:52.621297  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:54.621764  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:53.433345  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:53.446747  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:53.446819  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:53.482928  585602 cri.go:89] found id: ""
	I1205 20:33:53.482967  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.482979  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:53.482988  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:53.483048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:53.519096  585602 cri.go:89] found id: ""
	I1205 20:33:53.519128  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.519136  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:53.519142  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:53.519196  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:53.556207  585602 cri.go:89] found id: ""
	I1205 20:33:53.556233  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.556243  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:53.556249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:53.556346  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:53.589708  585602 cri.go:89] found id: ""
	I1205 20:33:53.589736  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.589745  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:53.589758  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:53.589813  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:53.630344  585602 cri.go:89] found id: ""
	I1205 20:33:53.630371  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.630380  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:53.630386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:53.630438  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:53.668895  585602 cri.go:89] found id: ""
	I1205 20:33:53.668921  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.668929  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:53.668935  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:53.668987  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:53.706601  585602 cri.go:89] found id: ""
	I1205 20:33:53.706628  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.706638  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:53.706644  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:53.706704  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:53.744922  585602 cri.go:89] found id: ""
	I1205 20:33:53.744952  585602 logs.go:282] 0 containers: []
	W1205 20:33:53.744960  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:53.744970  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:53.744989  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:53.823816  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:53.823853  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:53.823928  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:53.905075  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:53.905118  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:53.955424  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:53.955468  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:54.014871  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:54.014916  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.537142  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:56.550409  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:56.550478  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:56.587148  585602 cri.go:89] found id: ""
	I1205 20:33:56.587174  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.587184  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:56.587190  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:56.587249  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:56.625153  585602 cri.go:89] found id: ""
	I1205 20:33:56.625180  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.625188  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:56.625193  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:56.625243  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:56.671545  585602 cri.go:89] found id: ""
	I1205 20:33:56.671573  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.671582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:56.671589  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:56.671652  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:56.712760  585602 cri.go:89] found id: ""
	I1205 20:33:56.712797  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.712810  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:56.712818  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:56.712890  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:56.751219  585602 cri.go:89] found id: ""
	I1205 20:33:56.751254  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.751266  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:56.751274  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:56.751340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:56.787946  585602 cri.go:89] found id: ""
	I1205 20:33:56.787985  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.787998  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:56.788007  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:56.788101  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:56.823057  585602 cri.go:89] found id: ""
	I1205 20:33:56.823095  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.823108  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:56.823114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:56.823170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:33:54.164074  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.165063  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.043193  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:58.044158  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.542798  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.624407  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:59.119743  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:33:56.860358  585602 cri.go:89] found id: ""
	I1205 20:33:56.860396  585602 logs.go:282] 0 containers: []
	W1205 20:33:56.860408  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:33:56.860421  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:33:56.860438  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:33:56.912954  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:33:56.912996  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:33:56.927642  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:33:56.927691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:33:57.007316  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:33:57.007344  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:33:57.007359  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:33:57.091471  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:33:57.091522  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:59.642150  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:33:59.656240  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:33:59.656324  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:33:59.695918  585602 cri.go:89] found id: ""
	I1205 20:33:59.695954  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.695965  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:33:59.695973  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:33:59.696037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:33:59.744218  585602 cri.go:89] found id: ""
	I1205 20:33:59.744250  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.744260  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:33:59.744278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:33:59.744340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:33:59.799035  585602 cri.go:89] found id: ""
	I1205 20:33:59.799081  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.799094  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:33:59.799102  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:33:59.799172  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:33:59.850464  585602 cri.go:89] found id: ""
	I1205 20:33:59.850505  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.850517  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:33:59.850526  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:33:59.850590  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:33:59.886441  585602 cri.go:89] found id: ""
	I1205 20:33:59.886477  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.886489  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:33:59.886497  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:33:59.886564  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:33:59.926689  585602 cri.go:89] found id: ""
	I1205 20:33:59.926728  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.926741  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:33:59.926751  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:33:59.926821  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:33:59.962615  585602 cri.go:89] found id: ""
	I1205 20:33:59.962644  585602 logs.go:282] 0 containers: []
	W1205 20:33:59.962653  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:33:59.962659  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:33:59.962716  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:00.001852  585602 cri.go:89] found id: ""
	I1205 20:34:00.001878  585602 logs.go:282] 0 containers: []
	W1205 20:34:00.001886  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:00.001897  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:00.001913  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:00.055465  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:00.055508  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:00.071904  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:00.071941  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:00.151225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:00.151248  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:00.151262  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:00.233869  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:00.233914  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:33:58.664773  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:00.664948  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:02.543019  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:04.543810  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:01.120136  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:03.120824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.620283  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:02.776751  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:02.790868  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:02.790945  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:02.834686  585602 cri.go:89] found id: ""
	I1205 20:34:02.834719  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.834731  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:02.834740  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:02.834823  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:02.871280  585602 cri.go:89] found id: ""
	I1205 20:34:02.871313  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.871333  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:02.871342  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:02.871413  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:02.907300  585602 cri.go:89] found id: ""
	I1205 20:34:02.907336  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.907346  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:02.907352  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:02.907406  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:02.945453  585602 cri.go:89] found id: ""
	I1205 20:34:02.945487  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.945499  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:02.945511  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:02.945587  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:02.980528  585602 cri.go:89] found id: ""
	I1205 20:34:02.980561  585602 logs.go:282] 0 containers: []
	W1205 20:34:02.980573  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:02.980580  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:02.980653  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:03.016919  585602 cri.go:89] found id: ""
	I1205 20:34:03.016946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.016955  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:03.016961  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:03.017012  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:03.053541  585602 cri.go:89] found id: ""
	I1205 20:34:03.053575  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.053588  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:03.053596  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:03.053655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:03.089907  585602 cri.go:89] found id: ""
	I1205 20:34:03.089946  585602 logs.go:282] 0 containers: []
	W1205 20:34:03.089959  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:03.089974  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:03.089991  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:03.144663  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:03.144700  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:03.160101  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:03.160140  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:03.231559  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:03.231583  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:03.231600  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:03.313226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:03.313271  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:05.855538  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:05.869019  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:05.869120  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:05.906879  585602 cri.go:89] found id: ""
	I1205 20:34:05.906910  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.906921  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:05.906928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:05.906994  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:05.946846  585602 cri.go:89] found id: ""
	I1205 20:34:05.946881  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.946893  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:05.946900  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:05.946968  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:05.984067  585602 cri.go:89] found id: ""
	I1205 20:34:05.984104  585602 logs.go:282] 0 containers: []
	W1205 20:34:05.984118  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:05.984127  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:05.984193  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:06.024984  585602 cri.go:89] found id: ""
	I1205 20:34:06.025014  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.025023  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:06.025029  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:06.025091  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:06.064766  585602 cri.go:89] found id: ""
	I1205 20:34:06.064794  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.064806  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:06.064821  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:06.064877  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:06.105652  585602 cri.go:89] found id: ""
	I1205 20:34:06.105683  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.105691  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:06.105698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:06.105748  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:06.143732  585602 cri.go:89] found id: ""
	I1205 20:34:06.143762  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.143773  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:06.143781  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:06.143857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:06.183397  585602 cri.go:89] found id: ""
	I1205 20:34:06.183429  585602 logs.go:282] 0 containers: []
	W1205 20:34:06.183439  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:06.183449  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:06.183462  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:06.236403  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:06.236449  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:06.250728  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:06.250759  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:06.320983  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:06.321009  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:06.321025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:06.408037  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:06.408084  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:03.164354  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:05.665345  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:07.044218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:09.543580  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.119532  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.119918  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:08.955959  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:08.968956  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:08.969037  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:09.002804  585602 cri.go:89] found id: ""
	I1205 20:34:09.002846  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.002859  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:09.002866  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:09.002935  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:09.039098  585602 cri.go:89] found id: ""
	I1205 20:34:09.039191  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.039210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:09.039220  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:09.039291  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:09.074727  585602 cri.go:89] found id: ""
	I1205 20:34:09.074764  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.074776  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:09.074792  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:09.074861  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:09.112650  585602 cri.go:89] found id: ""
	I1205 20:34:09.112682  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.112692  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:09.112698  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:09.112754  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:09.149301  585602 cri.go:89] found id: ""
	I1205 20:34:09.149346  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.149359  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:09.149368  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:09.149432  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:09.190288  585602 cri.go:89] found id: ""
	I1205 20:34:09.190317  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.190329  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:09.190338  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:09.190404  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:09.225311  585602 cri.go:89] found id: ""
	I1205 20:34:09.225348  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.225361  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:09.225369  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:09.225435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:09.261023  585602 cri.go:89] found id: ""
	I1205 20:34:09.261052  585602 logs.go:282] 0 containers: []
	W1205 20:34:09.261063  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:09.261075  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:09.261092  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:09.313733  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:09.313785  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:09.329567  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:09.329619  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:09.403397  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:09.403430  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:09.403447  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:09.486586  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:09.486630  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
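	The block above is one full cycle of minikube's log collector: it probes each expected control-plane component with crictl, finds no containers, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output while the apiserver at localhost:8443 is still unreachable. A minimal bash sketch of that per-component probe, assuming crictl is installed and can reach the CRI-O socket (component names taken from the log; this is an illustration, not part of the test output):

	#!/usr/bin/env bash
	# Hypothetical sketch: replays the per-component container probe seen in the
	# log above. Assumes crictl is on PATH and configured for the CRI-O socket.
	set -u
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids="$(sudo crictl ps -a --quiet --name="${name}")"
	  if [ -z "${ids}" ]; then
	    echo "No container was found matching \"${name}\""
	  else
	    echo "found ${name}: ${ids}"
	  fi
	done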
	I1205 20:34:08.163730  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:10.663603  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.665663  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:11.544538  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.042854  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.120629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:14.621977  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:12.028110  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:12.041802  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:12.041866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:12.080349  585602 cri.go:89] found id: ""
	I1205 20:34:12.080388  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.080402  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:12.080410  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:12.080475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:12.121455  585602 cri.go:89] found id: ""
	I1205 20:34:12.121486  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.121499  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:12.121507  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:12.121567  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:12.157743  585602 cri.go:89] found id: ""
	I1205 20:34:12.157768  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.157785  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:12.157794  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:12.157855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:12.196901  585602 cri.go:89] found id: ""
	I1205 20:34:12.196933  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.196946  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:12.196954  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:12.197024  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:12.234471  585602 cri.go:89] found id: ""
	I1205 20:34:12.234500  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.234508  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:12.234516  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:12.234585  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:12.269238  585602 cri.go:89] found id: ""
	I1205 20:34:12.269263  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.269271  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:12.269278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:12.269340  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:12.307965  585602 cri.go:89] found id: ""
	I1205 20:34:12.308006  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.308016  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:12.308022  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:12.308081  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:12.343463  585602 cri.go:89] found id: ""
	I1205 20:34:12.343497  585602 logs.go:282] 0 containers: []
	W1205 20:34:12.343510  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:12.343536  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:12.343574  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:12.393393  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:12.393437  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:12.407991  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:12.408025  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:12.477868  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:12.477910  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:12.477924  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:12.557274  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:12.557315  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.102587  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:15.115734  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:15.115808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:15.153057  585602 cri.go:89] found id: ""
	I1205 20:34:15.153091  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.153105  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:15.153113  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:15.153182  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:15.192762  585602 cri.go:89] found id: ""
	I1205 20:34:15.192815  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.192825  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:15.192831  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:15.192887  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:15.231330  585602 cri.go:89] found id: ""
	I1205 20:34:15.231364  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.231374  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:15.231380  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:15.231435  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:15.265229  585602 cri.go:89] found id: ""
	I1205 20:34:15.265262  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.265271  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:15.265278  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:15.265350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:15.299596  585602 cri.go:89] found id: ""
	I1205 20:34:15.299624  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.299634  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:15.299640  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:15.299699  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:15.336155  585602 cri.go:89] found id: ""
	I1205 20:34:15.336187  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.336195  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:15.336202  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:15.336256  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:15.371867  585602 cri.go:89] found id: ""
	I1205 20:34:15.371899  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.371909  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:15.371920  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:15.371976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:15.408536  585602 cri.go:89] found id: ""
	I1205 20:34:15.408566  585602 logs.go:282] 0 containers: []
	W1205 20:34:15.408580  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:15.408592  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:15.408609  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:15.422499  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:15.422538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:15.495096  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:15.495131  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:15.495145  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:15.571411  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:15.571461  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:15.612284  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:15.612319  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:15.165343  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.165619  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:16.043962  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.542495  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:17.119936  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:19.622046  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:18.168869  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:18.184247  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:18.184370  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:18.226078  585602 cri.go:89] found id: ""
	I1205 20:34:18.226112  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.226124  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:18.226133  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:18.226202  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:18.266221  585602 cri.go:89] found id: ""
	I1205 20:34:18.266258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.266270  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:18.266278  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:18.266349  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:18.305876  585602 cri.go:89] found id: ""
	I1205 20:34:18.305903  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.305912  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:18.305921  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:18.305971  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:18.342044  585602 cri.go:89] found id: ""
	I1205 20:34:18.342077  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.342089  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:18.342098  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:18.342160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:18.380240  585602 cri.go:89] found id: ""
	I1205 20:34:18.380290  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.380301  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:18.380310  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:18.380372  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:18.416228  585602 cri.go:89] found id: ""
	I1205 20:34:18.416258  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.416301  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:18.416311  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:18.416380  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:18.453368  585602 cri.go:89] found id: ""
	I1205 20:34:18.453407  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.453420  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:18.453429  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:18.453513  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:18.491689  585602 cri.go:89] found id: ""
	I1205 20:34:18.491727  585602 logs.go:282] 0 containers: []
	W1205 20:34:18.491739  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:18.491754  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:18.491779  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:18.546614  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:18.546652  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:18.560516  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:18.560547  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:18.637544  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:18.637568  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:18.637582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:18.720410  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:18.720453  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:21.261494  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:21.276378  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:21.276473  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:21.317571  585602 cri.go:89] found id: ""
	I1205 20:34:21.317602  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.317610  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:21.317617  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:21.317670  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:21.355174  585602 cri.go:89] found id: ""
	I1205 20:34:21.355202  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.355210  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:21.355217  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:21.355277  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:21.393259  585602 cri.go:89] found id: ""
	I1205 20:34:21.393297  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.393310  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:21.393317  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:21.393408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:21.432286  585602 cri.go:89] found id: ""
	I1205 20:34:21.432329  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.432341  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:21.432348  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:21.432415  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:21.469844  585602 cri.go:89] found id: ""
	I1205 20:34:21.469877  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.469888  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:21.469896  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:21.469964  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:21.508467  585602 cri.go:89] found id: ""
	I1205 20:34:21.508507  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.508519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:21.508528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:21.508592  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:21.553053  585602 cri.go:89] found id: ""
	I1205 20:34:21.553185  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.553208  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:21.553226  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:21.553317  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:21.590595  585602 cri.go:89] found id: ""
	I1205 20:34:21.590629  585602 logs.go:282] 0 containers: []
	W1205 20:34:21.590640  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:21.590654  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:21.590672  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:21.649493  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:21.649546  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:21.666114  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:21.666147  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:21.742801  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:21.742828  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:21.742858  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:21.822949  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:21.823010  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:19.165951  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.664450  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:21.043233  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:23.043477  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:25.543490  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:22.119177  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.119685  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:24.366575  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:24.380894  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:24.380992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:24.416907  585602 cri.go:89] found id: ""
	I1205 20:34:24.416943  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.416956  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:24.416965  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:24.417034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:24.453303  585602 cri.go:89] found id: ""
	I1205 20:34:24.453337  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.453349  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:24.453358  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:24.453445  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:24.496795  585602 cri.go:89] found id: ""
	I1205 20:34:24.496825  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.496833  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:24.496839  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:24.496907  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:24.539105  585602 cri.go:89] found id: ""
	I1205 20:34:24.539142  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.539154  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:24.539162  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:24.539230  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:24.576778  585602 cri.go:89] found id: ""
	I1205 20:34:24.576808  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.576816  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:24.576822  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:24.576879  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:24.617240  585602 cri.go:89] found id: ""
	I1205 20:34:24.617271  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.617280  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:24.617293  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:24.617374  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:24.659274  585602 cri.go:89] found id: ""
	I1205 20:34:24.659316  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.659330  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:24.659342  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:24.659408  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:24.701047  585602 cri.go:89] found id: ""
	I1205 20:34:24.701092  585602 logs.go:282] 0 containers: []
	W1205 20:34:24.701105  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:24.701121  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:24.701139  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:24.741070  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:24.741115  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:24.793364  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:24.793407  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:24.807803  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:24.807839  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:24.883194  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:24.883225  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:24.883243  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:24.163198  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.165402  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:27.544607  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.044244  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:26.619847  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:28.621467  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:30.621704  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:27.467460  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:27.483055  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:27.483129  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:27.523718  585602 cri.go:89] found id: ""
	I1205 20:34:27.523752  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.523763  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:27.523772  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:27.523841  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:27.562872  585602 cri.go:89] found id: ""
	I1205 20:34:27.562899  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.562908  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:27.562915  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:27.562976  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:27.601804  585602 cri.go:89] found id: ""
	I1205 20:34:27.601835  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.601845  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:27.601852  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:27.601916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:27.640553  585602 cri.go:89] found id: ""
	I1205 20:34:27.640589  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.640599  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:27.640605  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:27.640672  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:27.680983  585602 cri.go:89] found id: ""
	I1205 20:34:27.681015  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.681027  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:27.681035  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:27.681105  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:27.720766  585602 cri.go:89] found id: ""
	I1205 20:34:27.720811  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.720821  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:27.720828  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:27.720886  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:27.761422  585602 cri.go:89] found id: ""
	I1205 20:34:27.761453  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.761466  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:27.761480  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:27.761550  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:27.799658  585602 cri.go:89] found id: ""
	I1205 20:34:27.799692  585602 logs.go:282] 0 containers: []
	W1205 20:34:27.799705  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:27.799720  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:27.799736  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:27.851801  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:27.851845  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:27.865953  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:27.865984  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:27.941787  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:27.941824  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:27.941840  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:28.023556  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:28.023616  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:30.573267  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:30.586591  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:30.586679  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:30.629923  585602 cri.go:89] found id: ""
	I1205 20:34:30.629960  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.629974  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:30.629982  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:30.630048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:30.667045  585602 cri.go:89] found id: ""
	I1205 20:34:30.667078  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.667090  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:30.667098  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:30.667167  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:30.704479  585602 cri.go:89] found id: ""
	I1205 20:34:30.704510  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.704522  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:30.704530  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:30.704620  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:30.746035  585602 cri.go:89] found id: ""
	I1205 20:34:30.746065  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.746077  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:30.746085  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:30.746161  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:30.784375  585602 cri.go:89] found id: ""
	I1205 20:34:30.784415  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.784425  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:30.784431  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:30.784487  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:30.821779  585602 cri.go:89] found id: ""
	I1205 20:34:30.821811  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.821822  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:30.821831  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:30.821905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:30.856927  585602 cri.go:89] found id: ""
	I1205 20:34:30.856963  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.856976  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:30.856984  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:30.857088  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:30.895852  585602 cri.go:89] found id: ""
	I1205 20:34:30.895882  585602 logs.go:282] 0 containers: []
	W1205 20:34:30.895894  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:30.895914  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:30.895930  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:30.947600  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:30.947642  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:30.962717  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:30.962753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:31.049225  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:31.049262  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:31.049280  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:31.126806  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:31.126850  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:28.665006  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:31.164172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:32.548634  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.042159  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.120370  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.621247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:33.670844  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:33.685063  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:33.685160  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:33.718277  585602 cri.go:89] found id: ""
	I1205 20:34:33.718312  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.718321  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:33.718327  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:33.718378  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.755409  585602 cri.go:89] found id: ""
	I1205 20:34:33.755445  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.755456  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:33.755465  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:33.755542  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:33.809447  585602 cri.go:89] found id: ""
	I1205 20:34:33.809506  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.809519  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:33.809527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:33.809599  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:33.848327  585602 cri.go:89] found id: ""
	I1205 20:34:33.848362  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.848376  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:33.848384  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:33.848444  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:33.887045  585602 cri.go:89] found id: ""
	I1205 20:34:33.887082  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.887094  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:33.887103  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:33.887178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:33.924385  585602 cri.go:89] found id: ""
	I1205 20:34:33.924418  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.924427  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:33.924434  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:33.924499  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:33.960711  585602 cri.go:89] found id: ""
	I1205 20:34:33.960738  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.960747  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:33.960757  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:33.960808  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:33.998150  585602 cri.go:89] found id: ""
	I1205 20:34:33.998184  585602 logs.go:282] 0 containers: []
	W1205 20:34:33.998193  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:33.998203  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:33.998215  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:34.041977  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:34.042006  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:34.095895  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:34.095940  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:34.109802  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:34.109836  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:34.185716  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:34.185740  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:34.185753  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:36.767768  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:36.782114  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:36.782201  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:36.820606  585602 cri.go:89] found id: ""
	I1205 20:34:36.820647  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.820659  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:36.820668  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:36.820736  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:33.164572  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:35.664069  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:37.043102  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:39.544667  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:38.120555  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.619948  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:36.858999  585602 cri.go:89] found id: ""
	I1205 20:34:36.859033  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.859044  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:36.859051  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:36.859117  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:36.896222  585602 cri.go:89] found id: ""
	I1205 20:34:36.896257  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.896282  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:36.896290  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:36.896352  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:36.935565  585602 cri.go:89] found id: ""
	I1205 20:34:36.935602  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.935612  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:36.935618  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:36.935671  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:36.974031  585602 cri.go:89] found id: ""
	I1205 20:34:36.974066  585602 logs.go:282] 0 containers: []
	W1205 20:34:36.974079  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:36.974096  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:36.974166  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:37.018243  585602 cri.go:89] found id: ""
	I1205 20:34:37.018278  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.018290  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:37.018300  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:37.018371  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:37.057715  585602 cri.go:89] found id: ""
	I1205 20:34:37.057742  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.057750  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:37.057756  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:37.057806  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:37.099006  585602 cri.go:89] found id: ""
	I1205 20:34:37.099037  585602 logs.go:282] 0 containers: []
	W1205 20:34:37.099045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:37.099055  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:37.099070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:37.186218  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:37.186264  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:37.232921  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:37.232955  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:37.285539  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:37.285581  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:37.301115  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:37.301155  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:37.373249  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:39.873692  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:39.887772  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:39.887847  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:39.925558  585602 cri.go:89] found id: ""
	I1205 20:34:39.925595  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.925607  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:39.925615  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:39.925684  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:39.964967  585602 cri.go:89] found id: ""
	I1205 20:34:39.964994  585602 logs.go:282] 0 containers: []
	W1205 20:34:39.965004  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:39.965011  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:39.965073  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:40.010875  585602 cri.go:89] found id: ""
	I1205 20:34:40.010911  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.010923  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:40.010930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:40.011003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:40.050940  585602 cri.go:89] found id: ""
	I1205 20:34:40.050970  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.050981  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:40.050990  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:40.051052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:40.086157  585602 cri.go:89] found id: ""
	I1205 20:34:40.086197  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.086210  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:40.086219  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:40.086283  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:40.123280  585602 cri.go:89] found id: ""
	I1205 20:34:40.123321  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.123333  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:40.123344  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:40.123414  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:40.164755  585602 cri.go:89] found id: ""
	I1205 20:34:40.164784  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.164793  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:40.164800  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:40.164871  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:40.211566  585602 cri.go:89] found id: ""
	I1205 20:34:40.211595  585602 logs.go:282] 0 containers: []
	W1205 20:34:40.211608  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:40.211621  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:40.211638  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:40.275269  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:40.275326  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:40.303724  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:40.303754  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:40.377315  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:40.377345  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:40.377360  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:40.457744  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:40.457794  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:38.163598  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:40.164173  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.043947  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:44.542445  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:42.621824  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:45.120127  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
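	(The block above is one full probe pass for process 585602: for each expected control-plane component it lists matching CRI containers and finds none. A minimal shell sketch of that pass, reusing the crictl invocation copied verbatim from the ssh_runner lines above; the loop wrapper itself is illustrative and not part of the captured log:)

	    # Illustrative loop over the components probed in the log; the crictl
	    # command is the one shown in the "Run: sudo crictl ps" lines above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      fi
	    done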
	I1205 20:34:43.000390  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:43.015220  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:43.015308  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:43.051919  585602 cri.go:89] found id: ""
	I1205 20:34:43.051946  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.051955  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:43.051961  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:43.052034  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:43.088188  585602 cri.go:89] found id: ""
	I1205 20:34:43.088230  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.088241  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:43.088249  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:43.088350  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:43.125881  585602 cri.go:89] found id: ""
	I1205 20:34:43.125910  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.125922  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:43.125930  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:43.125988  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:43.166630  585602 cri.go:89] found id: ""
	I1205 20:34:43.166657  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.166674  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:43.166682  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:43.166744  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:43.206761  585602 cri.go:89] found id: ""
	I1205 20:34:43.206791  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.206803  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:43.206810  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:43.206873  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:43.242989  585602 cri.go:89] found id: ""
	I1205 20:34:43.243017  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.243026  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:43.243033  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:43.243094  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:43.281179  585602 cri.go:89] found id: ""
	I1205 20:34:43.281208  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.281217  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:43.281223  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:43.281272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:43.317283  585602 cri.go:89] found id: ""
	I1205 20:34:43.317314  585602 logs.go:282] 0 containers: []
	W1205 20:34:43.317326  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:43.317347  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:43.317362  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:43.369262  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:43.369303  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:43.386137  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:43.386182  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:43.458532  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:43.458553  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:43.458566  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:43.538254  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:43.538296  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:46.083593  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:46.101024  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:46.101133  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:46.169786  585602 cri.go:89] found id: ""
	I1205 20:34:46.169817  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.169829  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:46.169838  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:46.169905  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:46.218647  585602 cri.go:89] found id: ""
	I1205 20:34:46.218689  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.218704  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:46.218713  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:46.218790  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:46.262718  585602 cri.go:89] found id: ""
	I1205 20:34:46.262749  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.262758  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:46.262764  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:46.262846  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:46.301606  585602 cri.go:89] found id: ""
	I1205 20:34:46.301638  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.301649  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:46.301656  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:46.301714  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:46.337313  585602 cri.go:89] found id: ""
	I1205 20:34:46.337347  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.337356  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:46.337362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:46.337422  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:46.380171  585602 cri.go:89] found id: ""
	I1205 20:34:46.380201  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.380209  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:46.380215  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:46.380288  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:46.423054  585602 cri.go:89] found id: ""
	I1205 20:34:46.423089  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.423101  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:46.423109  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:46.423178  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:46.467615  585602 cri.go:89] found id: ""
	I1205 20:34:46.467647  585602 logs.go:282] 0 containers: []
	W1205 20:34:46.467659  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:46.467673  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:46.467687  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:46.522529  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:46.522579  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:46.537146  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:46.537199  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:46.609585  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:46.609618  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:46.609637  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:46.696093  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:46.696152  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
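	(Each probe pass ends with the same log-gathering step. The commands below are the ones the log records minikube running over SSH, listed together only for readability; this assumes a node where kubelet and crio run as systemd units, as the journalctl calls in the log imply:)

	    # Commands copied from the "Gathering logs for ..." lines above.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig   # fails here: localhost:8443 refused
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a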
	I1205 20:34:45.164249  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.664159  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:46.547883  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.043793  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:47.623375  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:50.122680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:49.238735  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:49.256406  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:49.256484  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:49.294416  585602 cri.go:89] found id: ""
	I1205 20:34:49.294449  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.294458  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:49.294467  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:49.294528  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:49.334235  585602 cri.go:89] found id: ""
	I1205 20:34:49.334268  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.334282  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:49.334290  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:49.334362  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:49.372560  585602 cri.go:89] found id: ""
	I1205 20:34:49.372637  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.372662  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:49.372674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:49.372756  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:49.413779  585602 cri.go:89] found id: ""
	I1205 20:34:49.413813  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.413822  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:49.413829  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:49.413900  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:49.449513  585602 cri.go:89] found id: ""
	I1205 20:34:49.449543  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.449553  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:49.449560  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:49.449630  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:49.488923  585602 cri.go:89] found id: ""
	I1205 20:34:49.488961  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.488973  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:49.488982  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:49.489050  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:49.524922  585602 cri.go:89] found id: ""
	I1205 20:34:49.524959  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.524971  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:49.524980  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:49.525048  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:49.565700  585602 cri.go:89] found id: ""
	I1205 20:34:49.565735  585602 logs.go:282] 0 containers: []
	W1205 20:34:49.565745  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:49.565756  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:49.565769  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:49.624297  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:49.624339  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:49.641424  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:49.641465  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:49.721474  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:49.721504  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:49.721517  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:49.810777  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:49.810822  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:49.664998  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.163337  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:51.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:54.045218  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.621649  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:55.120035  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:52.354661  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:52.368481  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:52.368555  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:52.407081  585602 cri.go:89] found id: ""
	I1205 20:34:52.407110  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.407118  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:52.407125  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:52.407189  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:52.444462  585602 cri.go:89] found id: ""
	I1205 20:34:52.444489  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.444498  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:52.444505  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:52.444562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:52.483546  585602 cri.go:89] found id: ""
	I1205 20:34:52.483573  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.483582  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:52.483595  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:52.483648  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:52.526529  585602 cri.go:89] found id: ""
	I1205 20:34:52.526567  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.526579  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:52.526587  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:52.526655  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:52.564875  585602 cri.go:89] found id: ""
	I1205 20:34:52.564904  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.564913  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:52.564919  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:52.564984  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:52.599367  585602 cri.go:89] found id: ""
	I1205 20:34:52.599397  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.599410  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:52.599419  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:52.599475  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:52.638192  585602 cri.go:89] found id: ""
	I1205 20:34:52.638233  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.638247  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:52.638255  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:52.638336  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:52.675227  585602 cri.go:89] found id: ""
	I1205 20:34:52.675264  585602 logs.go:282] 0 containers: []
	W1205 20:34:52.675275  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:52.675287  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:52.675311  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:52.716538  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:52.716582  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:52.772121  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:52.772162  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:52.787598  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:52.787632  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:52.865380  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:52.865408  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:52.865422  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.449288  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:55.462386  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:55.462474  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:55.498350  585602 cri.go:89] found id: ""
	I1205 20:34:55.498382  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.498391  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:55.498397  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:55.498457  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:55.540878  585602 cri.go:89] found id: ""
	I1205 20:34:55.540915  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.540929  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:55.540939  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:55.541022  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:55.577248  585602 cri.go:89] found id: ""
	I1205 20:34:55.577277  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.577288  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:55.577294  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:55.577375  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:55.615258  585602 cri.go:89] found id: ""
	I1205 20:34:55.615287  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.615308  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:55.615316  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:55.615384  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:55.652102  585602 cri.go:89] found id: ""
	I1205 20:34:55.652136  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.652147  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:55.652157  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:55.652228  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:55.689353  585602 cri.go:89] found id: ""
	I1205 20:34:55.689387  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.689399  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:55.689408  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:55.689486  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:55.727603  585602 cri.go:89] found id: ""
	I1205 20:34:55.727634  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.727648  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:55.727657  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:55.727729  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:55.765103  585602 cri.go:89] found id: ""
	I1205 20:34:55.765134  585602 logs.go:282] 0 containers: []
	W1205 20:34:55.765143  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:55.765156  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:55.765169  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:34:55.823878  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:55.823923  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:55.838966  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:55.839001  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:55.909385  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:55.909412  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:55.909424  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:55.992036  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:55.992080  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:54.165488  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.166030  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:56.542663  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.543260  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:57.120140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:59.621190  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:34:58.537231  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:34:58.552307  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:34:58.552392  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:34:58.589150  585602 cri.go:89] found id: ""
	I1205 20:34:58.589184  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.589200  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:34:58.589206  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:34:58.589272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:34:58.630344  585602 cri.go:89] found id: ""
	I1205 20:34:58.630370  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.630378  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:34:58.630385  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:34:58.630452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:34:58.669953  585602 cri.go:89] found id: ""
	I1205 20:34:58.669981  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.669991  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:34:58.669999  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:34:58.670055  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:34:58.708532  585602 cri.go:89] found id: ""
	I1205 20:34:58.708562  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.708570  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:34:58.708577  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:34:58.708631  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:34:58.745944  585602 cri.go:89] found id: ""
	I1205 20:34:58.745975  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.745986  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:34:58.745994  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:34:58.746051  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.787177  585602 cri.go:89] found id: ""
	I1205 20:34:58.787206  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.787214  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:34:58.787221  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:34:58.787272  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:34:58.822084  585602 cri.go:89] found id: ""
	I1205 20:34:58.822123  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.822134  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:34:58.822142  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:34:58.822210  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:34:58.858608  585602 cri.go:89] found id: ""
	I1205 20:34:58.858645  585602 logs.go:282] 0 containers: []
	W1205 20:34:58.858657  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:34:58.858670  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:34:58.858691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:34:58.873289  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:34:58.873322  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:34:58.947855  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:34:58.947884  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:34:58.947900  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:34:59.028348  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:34:59.028397  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:34:59.069172  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:34:59.069206  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.623309  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:01.637362  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:01.637449  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:01.678867  585602 cri.go:89] found id: ""
	I1205 20:35:01.678907  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.678919  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:01.678928  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:01.679001  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:01.715333  585602 cri.go:89] found id: ""
	I1205 20:35:01.715364  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.715372  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:01.715379  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:01.715439  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:01.754247  585602 cri.go:89] found id: ""
	I1205 20:35:01.754277  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.754286  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:01.754292  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:01.754348  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:01.791922  585602 cri.go:89] found id: ""
	I1205 20:35:01.791957  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.791968  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:01.791977  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:01.792045  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:01.827261  585602 cri.go:89] found id: ""
	I1205 20:35:01.827294  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.827307  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:01.827315  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:01.827389  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:34:58.665248  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.163431  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.043056  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:03.543015  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:02.122540  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:04.620544  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:01.864205  585602 cri.go:89] found id: ""
	I1205 20:35:01.864234  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.864243  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:01.864249  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:01.864332  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:01.902740  585602 cri.go:89] found id: ""
	I1205 20:35:01.902773  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.902783  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:01.902789  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:01.902857  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:01.941627  585602 cri.go:89] found id: ""
	I1205 20:35:01.941657  585602 logs.go:282] 0 containers: []
	W1205 20:35:01.941666  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:01.941677  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:01.941690  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:01.995743  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:01.995791  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:02.010327  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:02.010368  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:02.086879  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:02.086907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:02.086921  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:02.166500  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:02.166538  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:04.716638  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:04.730922  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:04.730992  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:04.768492  585602 cri.go:89] found id: ""
	I1205 20:35:04.768524  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.768534  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:04.768540  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:04.768606  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:04.803740  585602 cri.go:89] found id: ""
	I1205 20:35:04.803776  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.803789  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:04.803797  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:04.803866  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:04.840907  585602 cri.go:89] found id: ""
	I1205 20:35:04.840947  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.840960  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:04.840968  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:04.841036  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:04.875901  585602 cri.go:89] found id: ""
	I1205 20:35:04.875933  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.875943  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:04.875949  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:04.876003  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:04.913581  585602 cri.go:89] found id: ""
	I1205 20:35:04.913617  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.913627  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:04.913634  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:04.913689  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:04.952460  585602 cri.go:89] found id: ""
	I1205 20:35:04.952504  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.952519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:04.952528  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:04.952617  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:04.989939  585602 cri.go:89] found id: ""
	I1205 20:35:04.989968  585602 logs.go:282] 0 containers: []
	W1205 20:35:04.989979  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:04.989985  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:04.990041  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:05.025017  585602 cri.go:89] found id: ""
	I1205 20:35:05.025052  585602 logs.go:282] 0 containers: []
	W1205 20:35:05.025066  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:05.025078  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:05.025094  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:05.068179  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:05.068223  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:05.127311  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:05.127369  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:05.141092  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:05.141129  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:05.217648  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:05.217678  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:05.217691  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:03.163987  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:05.164131  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:07.165804  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:06.043765  585113 pod_ready.go:103] pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:08.036400  585113 pod_ready.go:82] duration metric: took 4m0.000157493s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" ...
	E1205 20:35:08.036457  585113 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tlsjl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:35:08.036489  585113 pod_ready.go:39] duration metric: took 4m11.05050249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:08.036554  585113 kubeadm.go:597] duration metric: took 4m18.178903617s to restartPrimaryControlPlane
	W1205 20:35:08.036733  585113 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:08.036784  585113 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:06.621887  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:09.119692  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
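	(The pod_ready lines interleaved through this log are three other test processes polling their metrics-server pods for a Ready condition; process 585113 gives up above after its 4m budget. A rough kubectl equivalent of that readiness check, purely illustrative since the log only exposes the pod name, not a label selector:)

	    # Hypothetical one-off check mirroring what pod_ready.go polls for.
	    kubectl --namespace kube-system get pod metrics-server-6867b74b74-tlsjl \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # Prints "False" while the pod is not Ready, matching the log lines above.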
	I1205 20:35:07.793457  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:07.808710  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:07.808778  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:07.846331  585602 cri.go:89] found id: ""
	I1205 20:35:07.846366  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.846380  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:07.846389  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:07.846462  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:07.881185  585602 cri.go:89] found id: ""
	I1205 20:35:07.881222  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.881236  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:07.881243  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:07.881307  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:07.918463  585602 cri.go:89] found id: ""
	I1205 20:35:07.918501  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.918514  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:07.918522  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:07.918589  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:07.956329  585602 cri.go:89] found id: ""
	I1205 20:35:07.956364  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.956375  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:07.956385  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:07.956456  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:07.992173  585602 cri.go:89] found id: ""
	I1205 20:35:07.992212  585602 logs.go:282] 0 containers: []
	W1205 20:35:07.992222  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:07.992229  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:07.992318  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:08.030183  585602 cri.go:89] found id: ""
	I1205 20:35:08.030214  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.030226  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:08.030235  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:08.030309  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:08.072320  585602 cri.go:89] found id: ""
	I1205 20:35:08.072362  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.072374  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:08.072382  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:08.072452  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:08.124220  585602 cri.go:89] found id: ""
	I1205 20:35:08.124253  585602 logs.go:282] 0 containers: []
	W1205 20:35:08.124277  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:08.124292  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:08.124310  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:08.171023  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:08.171057  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:08.237645  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:08.237699  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:08.252708  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:08.252744  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:08.343107  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:08.343140  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:08.343158  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:10.919646  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:10.934494  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:10.934562  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:10.971816  585602 cri.go:89] found id: ""
	I1205 20:35:10.971855  585602 logs.go:282] 0 containers: []
	W1205 20:35:10.971868  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:10.971878  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:10.971950  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:11.010031  585602 cri.go:89] found id: ""
	I1205 20:35:11.010071  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.010084  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:11.010095  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:11.010170  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:11.046520  585602 cri.go:89] found id: ""
	I1205 20:35:11.046552  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.046561  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:11.046568  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:11.046632  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:11.081385  585602 cri.go:89] found id: ""
	I1205 20:35:11.081426  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.081440  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:11.081448  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:11.081522  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:11.122529  585602 cri.go:89] found id: ""
	I1205 20:35:11.122559  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.122568  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:11.122576  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:11.122656  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:11.161684  585602 cri.go:89] found id: ""
	I1205 20:35:11.161767  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.161788  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:11.161797  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:11.161862  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:11.199796  585602 cri.go:89] found id: ""
	I1205 20:35:11.199824  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.199833  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:11.199842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:11.199916  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:11.235580  585602 cri.go:89] found id: ""
	I1205 20:35:11.235617  585602 logs.go:282] 0 containers: []
	W1205 20:35:11.235625  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:11.235635  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:11.235647  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:11.291005  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:11.291055  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:11.305902  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:11.305947  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:11.375862  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:11.375894  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:11.375915  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:11.456701  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:11.456746  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:09.663952  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.664200  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:11.119954  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:13.120903  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:15.622247  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:14.006509  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:14.020437  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:14.020531  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:14.056878  585602 cri.go:89] found id: ""
	I1205 20:35:14.056905  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.056915  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:14.056923  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:14.056993  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:14.091747  585602 cri.go:89] found id: ""
	I1205 20:35:14.091782  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.091792  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:14.091800  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:14.091860  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:14.131409  585602 cri.go:89] found id: ""
	I1205 20:35:14.131440  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.131453  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:14.131461  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:14.131532  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:14.170726  585602 cri.go:89] found id: ""
	I1205 20:35:14.170754  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.170765  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:14.170773  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:14.170851  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:14.208619  585602 cri.go:89] found id: ""
	I1205 20:35:14.208654  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.208666  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:14.208674  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:14.208747  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:14.247734  585602 cri.go:89] found id: ""
	I1205 20:35:14.247771  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.247784  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:14.247793  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:14.247855  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:14.296090  585602 cri.go:89] found id: ""
	I1205 20:35:14.296119  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.296129  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:14.296136  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:14.296205  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:14.331009  585602 cri.go:89] found id: ""
	I1205 20:35:14.331037  585602 logs.go:282] 0 containers: []
	W1205 20:35:14.331045  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:14.331057  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:14.331070  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:14.384877  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:14.384935  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:14.400458  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:14.400507  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:14.475745  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:14.475774  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:14.475787  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:14.553150  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:14.553192  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:14.164516  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:16.165316  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:18.119418  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.120499  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:17.095700  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:17.109135  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:35:17.109215  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:35:17.146805  585602 cri.go:89] found id: ""
	I1205 20:35:17.146838  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.146851  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:35:17.146861  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:35:17.146919  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:35:17.186861  585602 cri.go:89] found id: ""
	I1205 20:35:17.186891  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.186901  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:35:17.186907  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:35:17.186960  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:35:17.223113  585602 cri.go:89] found id: ""
	I1205 20:35:17.223148  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.223159  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:35:17.223166  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:35:17.223238  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:35:17.263066  585602 cri.go:89] found id: ""
	I1205 20:35:17.263098  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.263110  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:35:17.263118  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:35:17.263187  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:35:17.300113  585602 cri.go:89] found id: ""
	I1205 20:35:17.300153  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.300167  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:35:17.300175  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:35:17.300237  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:35:17.339135  585602 cri.go:89] found id: ""
	I1205 20:35:17.339172  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.339184  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:35:17.339193  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:35:17.339260  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:35:17.376200  585602 cri.go:89] found id: ""
	I1205 20:35:17.376229  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.376239  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:35:17.376248  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:35:17.376354  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:35:17.411852  585602 cri.go:89] found id: ""
	I1205 20:35:17.411895  585602 logs.go:282] 0 containers: []
	W1205 20:35:17.411906  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:35:17.411919  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:35:17.411948  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:35:17.463690  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:35:17.463729  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:35:17.478912  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:35:17.478946  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:35:17.552874  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:35:17.552907  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:35:17.552933  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:35:17.633621  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:35:17.633667  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:35:20.175664  585602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:20.191495  585602 kubeadm.go:597] duration metric: took 4m4.568774806s to restartPrimaryControlPlane
	W1205 20:35:20.191570  585602 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:35:20.191594  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:35:20.660014  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:20.676684  585602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:20.688338  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:20.699748  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:20.699770  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:20.699822  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:20.710417  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:20.710497  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:20.722295  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:20.732854  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:20.732933  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:20.744242  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.754593  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:20.754671  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:20.766443  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:20.777087  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:20.777157  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:35:20.788406  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:20.869602  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:35:20.869778  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:21.022417  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:21.022558  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:21.022715  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:35:21.213817  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:21.216995  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:21.217146  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:21.217240  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:21.217373  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:21.217502  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:21.217614  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:21.217699  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:21.217784  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:21.217876  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:21.217985  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:21.218129  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:21.218186  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:21.218289  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:21.337924  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:21.464355  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:21.709734  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:21.837040  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:21.860767  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:21.860894  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:21.860934  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:22.002564  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:18.663978  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:20.665113  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.622593  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.120101  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:22.004407  585602 out.go:235]   - Booting up control plane ...
	I1205 20:35:22.004560  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:22.009319  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:22.010412  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:22.019041  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:22.021855  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:35:23.163493  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:25.164833  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.164914  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:27.619140  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.622476  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:29.664525  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:32.163413  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.411201  585113 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.37438104s)
	I1205 20:35:34.411295  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:34.428580  585113 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:35:34.439233  585113 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:35:34.450165  585113 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:35:34.450192  585113 kubeadm.go:157] found existing configuration files:
	
	I1205 20:35:34.450255  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:35:34.461910  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:35:34.461985  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:35:34.473936  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:35:34.484160  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:35:34.484240  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:35:34.495772  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.507681  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:35:34.507757  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:35:34.519932  585113 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:35:34.532111  585113 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:35:34.532190  585113 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:35:34.543360  585113 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:35:34.594095  585113 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:35:34.594214  585113 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:35:34.712502  585113 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:35:34.712685  585113 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:35:34.712818  585113 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:35:34.729419  585113 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:35:34.731281  585113 out.go:235]   - Generating certificates and keys ...
	I1205 20:35:34.731395  585113 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:35:34.731486  585113 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:35:34.731614  585113 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:35:34.731715  585113 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:35:34.731812  585113 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:35:34.731902  585113 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:35:34.731994  585113 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:35:34.732082  585113 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:35:34.732179  585113 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:35:34.732252  585113 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:35:34.732336  585113 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:35:34.732428  585113 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:35:35.125135  585113 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:35:35.188591  585113 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:35:35.330713  585113 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:35:35.497785  585113 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:35:35.839010  585113 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:35:35.839656  585113 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:35:35.842311  585113 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:35:32.118898  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.119153  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:34.164007  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:36.164138  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:35.844403  585113 out.go:235]   - Booting up control plane ...
	I1205 20:35:35.844534  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:35:35.844602  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:35:35.845242  585113 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:35:35.865676  585113 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:35:35.871729  585113 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:35:35.871825  585113 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:35:36.007728  585113 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:35:36.007948  585113 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:35:36.510090  585113 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.141078ms
	I1205 20:35:36.510208  585113 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:35:36.119432  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:38.121093  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.620523  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:41.512166  585113 kubeadm.go:310] [api-check] The API server is healthy after 5.00243802s
	I1205 20:35:41.529257  585113 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:35:41.545958  585113 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:35:41.585500  585113 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:35:41.585726  585113 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-789000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:35:41.606394  585113 kubeadm.go:310] [bootstrap-token] Using token: j30n5x.myrhz9pya6yl1f1z
	I1205 20:35:41.608046  585113 out.go:235]   - Configuring RBAC rules ...
	I1205 20:35:41.608229  585113 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:35:41.616083  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:35:41.625777  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:35:41.629934  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:35:41.633726  585113 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:35:41.640454  585113 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:35:41.923125  585113 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:35:42.363841  585113 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:35:42.924569  585113 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:35:42.924594  585113 kubeadm.go:310] 
	I1205 20:35:42.924660  585113 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:35:42.924668  585113 kubeadm.go:310] 
	I1205 20:35:42.924750  585113 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:35:42.924768  585113 kubeadm.go:310] 
	I1205 20:35:42.924802  585113 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:35:42.924865  585113 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:35:42.924926  585113 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:35:42.924969  585113 kubeadm.go:310] 
	I1205 20:35:42.925060  585113 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:35:42.925069  585113 kubeadm.go:310] 
	I1205 20:35:42.925120  585113 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:35:42.925154  585113 kubeadm.go:310] 
	I1205 20:35:42.925255  585113 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:35:42.925374  585113 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:35:42.925477  585113 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:35:42.925488  585113 kubeadm.go:310] 
	I1205 20:35:42.925604  585113 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:35:42.925691  585113 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:35:42.925701  585113 kubeadm.go:310] 
	I1205 20:35:42.925830  585113 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.925966  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:35:42.926019  585113 kubeadm.go:310] 	--control-plane 
	I1205 20:35:42.926034  585113 kubeadm.go:310] 
	I1205 20:35:42.926136  585113 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:35:42.926147  585113 kubeadm.go:310] 
	I1205 20:35:42.926258  585113 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j30n5x.myrhz9pya6yl1f1z \
	I1205 20:35:42.926400  585113 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
	I1205 20:35:42.927105  585113 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:35:42.927269  585113 cni.go:84] Creating CNI manager for ""
	I1205 20:35:42.927283  585113 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:35:42.929046  585113 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:35:38.164698  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:40.665499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:42.930620  585113 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:35:42.941706  585113 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 20:35:42.964041  585113 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:35:42.964154  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.964191  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-789000 minikube.k8s.io/updated_at=2024_12_05T20_35_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=embed-certs-789000 minikube.k8s.io/primary=true
	I1205 20:35:43.027876  585113 ops.go:34] apiserver oom_adj: -16
	I1205 20:35:43.203087  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:43.703446  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.203895  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:44.703277  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:45.203421  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:42.623820  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.118957  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.704129  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.203682  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:46.703213  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.203225  585113 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:35:47.330051  585113 kubeadm.go:1113] duration metric: took 4.365966546s to wait for elevateKubeSystemPrivileges
	I1205 20:35:47.330104  585113 kubeadm.go:394] duration metric: took 4m57.530103825s to StartCluster
	I1205 20:35:47.330143  585113 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.330296  585113 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:35:47.332937  585113 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:47.333273  585113 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:47.333380  585113 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:35:47.333478  585113 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-789000"
	I1205 20:35:47.333500  585113 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-789000"
	I1205 20:35:47.333499  585113 addons.go:69] Setting default-storageclass=true in profile "embed-certs-789000"
	W1205 20:35:47.333510  585113 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:35:47.333523  585113 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-789000"
	I1205 20:35:47.333545  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.333554  585113 config.go:182] Loaded profile config "embed-certs-789000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:47.333631  585113 addons.go:69] Setting metrics-server=true in profile "embed-certs-789000"
	I1205 20:35:47.333651  585113 addons.go:234] Setting addon metrics-server=true in "embed-certs-789000"
	W1205 20:35:47.333660  585113 addons.go:243] addon metrics-server should already be in state true
	I1205 20:35:47.333692  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.334001  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334043  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334003  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334101  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.334157  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.334339  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.335448  585113 out.go:177] * Verifying Kubernetes components...
	I1205 20:35:47.337056  585113 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:47.353039  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I1205 20:35:47.353726  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.354437  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.354467  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.354870  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.355580  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.355654  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.355702  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43665
	I1205 20:35:47.355760  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I1205 20:35:47.356180  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356224  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.356771  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356796  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.356815  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.356834  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.357246  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357245  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.357640  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.357862  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.357916  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.361951  585113 addons.go:234] Setting addon default-storageclass=true in "embed-certs-789000"
	W1205 20:35:47.361974  585113 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:35:47.362004  585113 host.go:66] Checking if "embed-certs-789000" exists ...
	I1205 20:35:47.362369  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.362416  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.372862  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I1205 20:35:47.373465  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.373983  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.374011  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.374347  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.374570  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.376329  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.378476  585113 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:35:47.379882  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:35:47.379909  585113 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:35:47.379933  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.382045  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
	I1205 20:35:47.382855  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.383440  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.383459  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.383563  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.383828  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.384092  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.384101  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.384117  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.384150  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1205 20:35:47.384381  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.384517  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.384635  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.384705  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.384850  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.385249  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.385262  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.385613  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.385744  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.386054  585113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:47.386085  585113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:47.387649  585113 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:35:43.164980  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:45.665449  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.665725  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.388998  585113 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.389011  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:35:47.389025  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.391724  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392285  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.392317  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.392362  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.392521  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.392663  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.392804  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.402558  585113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45343
	I1205 20:35:47.403109  585113 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:47.403636  585113 main.go:141] libmachine: Using API Version  1
	I1205 20:35:47.403653  585113 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:47.403977  585113 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:47.404155  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetState
	I1205 20:35:47.405636  585113 main.go:141] libmachine: (embed-certs-789000) Calling .DriverName
	I1205 20:35:47.405859  585113 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.405876  585113 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:35:47.405894  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHHostname
	I1205 20:35:47.408366  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.408827  585113 main.go:141] libmachine: (embed-certs-789000) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:ae:b2", ip: ""} in network mk-embed-certs-789000: {Iface:virbr1 ExpiryTime:2024-12-05 21:30:35 +0000 UTC Type:0 Mac:52:54:00:48:ae:b2 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:embed-certs-789000 Clientid:01:52:54:00:48:ae:b2}
	I1205 20:35:47.408868  585113 main.go:141] libmachine: (embed-certs-789000) DBG | domain embed-certs-789000 has defined IP address 192.168.39.200 and MAC address 52:54:00:48:ae:b2 in network mk-embed-certs-789000
	I1205 20:35:47.409107  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHPort
	I1205 20:35:47.409276  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHKeyPath
	I1205 20:35:47.409436  585113 main.go:141] libmachine: (embed-certs-789000) Calling .GetSSHUsername
	I1205 20:35:47.409577  585113 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/embed-certs-789000/id_rsa Username:docker}
	I1205 20:35:47.589046  585113 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:47.620164  585113 node_ready.go:35] waiting up to 6m0s for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635800  585113 node_ready.go:49] node "embed-certs-789000" has status "Ready":"True"
	I1205 20:35:47.635824  585113 node_ready.go:38] duration metric: took 15.625152ms for node "embed-certs-789000" to be "Ready" ...
	I1205 20:35:47.635836  585113 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:47.647842  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:47.738529  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:35:47.738558  585113 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:35:47.741247  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:35:47.741443  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:35:47.822503  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:35:47.822543  585113 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:35:47.886482  585113 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:47.886512  585113 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:35:47.926018  585113 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:35:48.100013  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100059  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.100371  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.100392  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.100408  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.100416  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.102261  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.102313  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.102342  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115407  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.115429  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.115762  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.115859  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.115870  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721035  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721068  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721380  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721400  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:48.721447  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:48.721465  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:48.721855  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:48.721868  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:48.721880  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.294512  585113 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.36844122s)
	I1205 20:35:49.294581  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.294598  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.294953  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295014  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295028  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295057  585113 main.go:141] libmachine: Making call to close driver server
	I1205 20:35:49.295071  585113 main.go:141] libmachine: (embed-certs-789000) Calling .Close
	I1205 20:35:49.295341  585113 main.go:141] libmachine: (embed-certs-789000) DBG | Closing plugin on server side
	I1205 20:35:49.295391  585113 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:35:49.295403  585113 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:35:49.295414  585113 addons.go:475] Verifying addon metrics-server=true in "embed-certs-789000"
	I1205 20:35:49.297183  585113 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:35:49.298509  585113 addons.go:510] duration metric: took 1.965140064s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:35:49.657195  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:47.121445  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:49.622568  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:50.163712  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.165654  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:52.155012  585113 pod_ready.go:103] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.155309  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.155346  585113 pod_ready.go:82] duration metric: took 6.507465102s for pod "coredns-7c65d6cfc9-6mp2h" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.155356  585113 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160866  585113 pod_ready.go:93] pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.160895  585113 pod_ready.go:82] duration metric: took 5.529623ms for pod "coredns-7c65d6cfc9-rh6pj" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.160909  585113 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166444  585113 pod_ready.go:93] pod "etcd-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:54.166475  585113 pod_ready.go:82] duration metric: took 5.558605ms for pod "etcd-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:54.166487  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:52.118202  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.119543  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:54.664661  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.162802  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:56.172832  585113 pod_ready.go:103] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:57.173005  585113 pod_ready.go:93] pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.173052  585113 pod_ready.go:82] duration metric: took 3.006542827s for pod "kube-apiserver-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.173068  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178461  585113 pod_ready.go:93] pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.178489  585113 pod_ready.go:82] duration metric: took 5.413563ms for pod "kube-controller-manager-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.178499  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183130  585113 pod_ready.go:93] pod "kube-proxy-znjpk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.183162  585113 pod_ready.go:82] duration metric: took 4.655743ms for pod "kube-proxy-znjpk" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.183178  585113 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351816  585113 pod_ready.go:93] pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace has status "Ready":"True"
	I1205 20:35:57.351842  585113 pod_ready.go:82] duration metric: took 168.656328ms for pod "kube-scheduler-embed-certs-789000" in "kube-system" namespace to be "Ready" ...
	I1205 20:35:57.351851  585113 pod_ready.go:39] duration metric: took 9.716003373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:35:57.351866  585113 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:35:57.351921  585113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:35:57.368439  585113 api_server.go:72] duration metric: took 10.035127798s to wait for apiserver process to appear ...
	I1205 20:35:57.368471  585113 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:35:57.368496  585113 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I1205 20:35:57.372531  585113 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I1205 20:35:57.373449  585113 api_server.go:141] control plane version: v1.31.2
	I1205 20:35:57.373466  585113 api_server.go:131] duration metric: took 4.987422ms to wait for apiserver health ...
	I1205 20:35:57.373474  585113 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:35:57.554591  585113 system_pods.go:59] 9 kube-system pods found
	I1205 20:35:57.554620  585113 system_pods.go:61] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.554625  585113 system_pods.go:61] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.554629  585113 system_pods.go:61] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.554633  585113 system_pods.go:61] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.554637  585113 system_pods.go:61] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.554640  585113 system_pods.go:61] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.554643  585113 system_pods.go:61] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.554649  585113 system_pods.go:61] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.554653  585113 system_pods.go:61] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.554660  585113 system_pods.go:74] duration metric: took 181.180919ms to wait for pod list to return data ...
	I1205 20:35:57.554667  585113 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:35:57.757196  585113 default_sa.go:45] found service account: "default"
	I1205 20:35:57.757226  585113 default_sa.go:55] duration metric: took 202.553823ms for default service account to be created ...
	I1205 20:35:57.757236  585113 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:35:57.956943  585113 system_pods.go:86] 9 kube-system pods found
	I1205 20:35:57.956976  585113 system_pods.go:89] "coredns-7c65d6cfc9-6mp2h" [01aaefd9-c549-4065-b3dd-a0e4d925e592] Running
	I1205 20:35:57.956982  585113 system_pods.go:89] "coredns-7c65d6cfc9-rh6pj" [4bdd8a47-abec-4dc4-a1ed-4a9a124417a3] Running
	I1205 20:35:57.956985  585113 system_pods.go:89] "etcd-embed-certs-789000" [356d7981-ab7a-40bf-866f-0285986f9a8d] Running
	I1205 20:35:57.956989  585113 system_pods.go:89] "kube-apiserver-embed-certs-789000" [bddc43d8-26f1-462b-a90b-8a4093bbb427] Running
	I1205 20:35:57.956992  585113 system_pods.go:89] "kube-controller-manager-embed-certs-789000" [800f92d7-e6e2-4cb8-9cc7-90595f4b512b] Running
	I1205 20:35:57.956996  585113 system_pods.go:89] "kube-proxy-znjpk" [f3df1a22-d7e0-4a83-84dd-0e710185ded6] Running
	I1205 20:35:57.956999  585113 system_pods.go:89] "kube-scheduler-embed-certs-789000" [327e3f02-3092-49fb-bfac-fc0485f02db3] Running
	I1205 20:35:57.957005  585113 system_pods.go:89] "metrics-server-6867b74b74-cs42k" [98b266c3-8ff0-4dc6-9c43-374dcd7c074a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:35:57.957010  585113 system_pods.go:89] "storage-provisioner" [2808c8da-8904-45a0-ae68-bfd68681540f] Running
	I1205 20:35:57.957019  585113 system_pods.go:126] duration metric: took 199.777723ms to wait for k8s-apps to be running ...
	I1205 20:35:57.957028  585113 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:35:57.957079  585113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:57.971959  585113 system_svc.go:56] duration metric: took 14.916307ms WaitForService to wait for kubelet
	I1205 20:35:57.972000  585113 kubeadm.go:582] duration metric: took 10.638693638s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:35:57.972027  585113 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:35:58.153272  585113 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:35:58.153302  585113 node_conditions.go:123] node cpu capacity is 2
	I1205 20:35:58.153323  585113 node_conditions.go:105] duration metric: took 181.282208ms to run NodePressure ...
	I1205 20:35:58.153338  585113 start.go:241] waiting for startup goroutines ...
	I1205 20:35:58.153348  585113 start.go:246] waiting for cluster config update ...
	I1205 20:35:58.153361  585113 start.go:255] writing updated cluster config ...
	I1205 20:35:58.153689  585113 ssh_runner.go:195] Run: rm -f paused
	I1205 20:35:58.206377  585113 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:35:58.208199  585113 out.go:177] * Done! kubectl is now configured to use "embed-certs-789000" cluster and "default" namespace by default
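The embed-certs startup above finishes by waiting for the kube-apiserver process and then polling https://192.168.39.200:8443/healthz until it returns 200 ("ok"). A minimal Go sketch of that kind of health polling is shown below; it is illustrative only (the waitForHealthz helper is hypothetical, and it skips TLS verification for brevity, whereas minikube validates the endpoint against the cluster CA):

```go
// healthzwait: a minimal sketch of the apiserver health polling reported in
// the "Checking apiserver healthz at ..." lines above. Not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.200:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```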
	I1205 20:35:56.626799  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.119621  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:35:59.164803  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.663254  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:01.119680  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:03.121023  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.121537  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:02.025194  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:36:02.025306  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:02.025498  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:03.664172  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:05.672410  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.623229  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.119845  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:07.025608  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:07.025922  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:08.164875  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:10.665374  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:12.622566  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.120084  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:13.163662  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:15.164021  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.619629  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:19.620524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:17.026490  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:17.026747  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:19.663904  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:22.164514  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:21.621019  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.119524  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:24.164932  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.670748  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:26.119795  585025 pod_ready.go:103] pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:27.113870  585025 pod_ready.go:82] duration metric: took 4m0.000886242s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:27.113920  585025 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-vjwq2" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 20:36:27.113943  585025 pod_ready.go:39] duration metric: took 4m14.547292745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:27.113975  585025 kubeadm.go:597] duration metric: took 4m21.939840666s to restartPrimaryControlPlane
	W1205 20:36:27.114068  585025 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 20:36:27.114099  585025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:36:29.163499  585929 pod_ready.go:103] pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace has status "Ready":"False"
	I1205 20:36:29.664158  585929 pod_ready.go:82] duration metric: took 4m0.007168384s for pod "metrics-server-6867b74b74-rq8xm" in "kube-system" namespace to be "Ready" ...
	E1205 20:36:29.664191  585929 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:36:29.664201  585929 pod_ready.go:39] duration metric: took 4m2.00733866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:29.664226  585929 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:36:29.664290  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:29.664377  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:29.712790  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:29.712814  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:29.712819  585929 cri.go:89] found id: ""
	I1205 20:36:29.712826  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:29.712879  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.717751  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.721968  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:29.722045  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:29.770289  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:29.770322  585929 cri.go:89] found id: ""
	I1205 20:36:29.770330  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:29.770392  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.775391  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:29.775475  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:29.816354  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:29.816380  585929 cri.go:89] found id: ""
	I1205 20:36:29.816388  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:29.816454  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.821546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:29.821621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:29.870442  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:29.870467  585929 cri.go:89] found id: ""
	I1205 20:36:29.870476  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:29.870541  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.875546  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:29.875658  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:29.924567  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:29.924595  585929 cri.go:89] found id: ""
	I1205 20:36:29.924603  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:29.924666  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.929148  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:29.929216  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:29.968092  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:29.968122  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:29.968126  585929 cri.go:89] found id: ""
	I1205 20:36:29.968134  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:29.968186  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.973062  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:29.977693  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:29.977762  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:30.014944  585929 cri.go:89] found id: ""
	I1205 20:36:30.014982  585929 logs.go:282] 0 containers: []
	W1205 20:36:30.014994  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:30.015002  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:30.015101  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:30.062304  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:30.062328  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:30.062332  585929 cri.go:89] found id: ""
	I1205 20:36:30.062339  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:30.062394  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.067152  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:30.071767  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:30.071788  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:30.125030  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:30.125069  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:30.167607  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:30.167641  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:30.217522  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:30.217558  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:30.298655  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:30.298695  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:30.346687  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:30.346721  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:30.887069  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:30.887126  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:30.907313  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:30.907360  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:30.950285  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:30.950326  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:30.990895  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:30.990929  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:31.032950  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:31.033010  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:31.115132  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:31.115176  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:31.257760  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:31.257797  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:31.300521  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:31.300553  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:31.338339  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:31.338373  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
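The "Gathering logs for ..." steps above shell out to crictl twice per component: `crictl ps -a --quiet --name=<component>` to list container IDs, then `crictl logs --tail 400 <id>` for each ID. The Go sketch below mirrors that pattern under the assumption that crictl is available locally on PATH; minikube itself runs these commands over SSH inside the guest VM, so this is not its logs.go implementation:

```go
// gatherlogs: a rough sketch of the crictl-based log collection shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) matching a name,
// mirroring "sudo crictl ps -a --quiet --name=<component>".
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs returns the last 400 log lines of one container, mirroring
// "sudo crictl logs --tail 400 <id>". CombinedOutput captures stderr too.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		for _, id := range ids {
			logs, err := tailLogs(id)
			fmt.Printf("==> %s [%s] (err=%v)\n%s\n", component, id, err, logs)
		}
	}
}
```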
	I1205 20:36:33.892406  585929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:36:33.908917  585929 api_server.go:72] duration metric: took 4m14.472283422s to wait for apiserver process to appear ...
	I1205 20:36:33.908950  585929 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:36:33.908993  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:33.909067  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:33.958461  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:33.958496  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:33.958502  585929 cri.go:89] found id: ""
	I1205 20:36:33.958511  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:33.958585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.963333  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:33.969472  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:33.969549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:34.010687  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.010711  585929 cri.go:89] found id: ""
	I1205 20:36:34.010721  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:34.010790  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.016468  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:34.016557  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:34.056627  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:34.056656  585929 cri.go:89] found id: ""
	I1205 20:36:34.056666  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:34.056729  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.061343  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:34.061411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:34.099534  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:34.099563  585929 cri.go:89] found id: ""
	I1205 20:36:34.099573  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:34.099643  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.104828  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:34.104891  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:34.150749  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:34.150781  585929 cri.go:89] found id: ""
	I1205 20:36:34.150792  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:34.150863  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.155718  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:34.155797  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:34.202896  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:34.202927  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:34.202934  585929 cri.go:89] found id: ""
	I1205 20:36:34.202943  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:34.203028  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.207791  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.212163  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:34.212243  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:34.254423  585929 cri.go:89] found id: ""
	I1205 20:36:34.254458  585929 logs.go:282] 0 containers: []
	W1205 20:36:34.254470  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:34.254479  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:34.254549  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:34.294704  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:34.294737  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:34.294741  585929 cri.go:89] found id: ""
	I1205 20:36:34.294753  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:34.294820  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.299361  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:34.305411  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:34.305437  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:34.357438  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:34.357472  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:34.405858  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:34.405893  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:34.898506  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:34.898551  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:35.009818  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:35.009856  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:35.048852  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:35.048882  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:35.100458  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:35.100511  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:35.139923  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:35.139959  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:35.184818  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:35.184852  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:35.265196  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:35.265238  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:35.280790  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:35.280830  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:35.323308  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:35.323343  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:35.364578  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:35.364610  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:35.411413  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:35.411456  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:35.458077  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:35.458117  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:37.997701  585929 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8444/healthz ...
	I1205 20:36:38.003308  585929 api_server.go:279] https://192.168.50.96:8444/healthz returned 200:
	ok
	I1205 20:36:38.004465  585929 api_server.go:141] control plane version: v1.31.2
	I1205 20:36:38.004495  585929 api_server.go:131] duration metric: took 4.095536578s to wait for apiserver health ...
	I1205 20:36:38.004505  585929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:36:38.004532  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:36:38.004598  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:36:37.027599  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:36:37.027910  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:36:38.048388  585929 cri.go:89] found id: "83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.048427  585929 cri.go:89] found id: "e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:38.048434  585929 cri.go:89] found id: ""
	I1205 20:36:38.048442  585929 logs.go:282] 2 containers: [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36]
	I1205 20:36:38.048514  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.052931  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.057338  585929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:36:38.057403  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:36:38.097715  585929 cri.go:89] found id: "62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.097750  585929 cri.go:89] found id: ""
	I1205 20:36:38.097761  585929 logs.go:282] 1 containers: [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff]
	I1205 20:36:38.097830  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.104038  585929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:36:38.104110  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:36:38.148485  585929 cri.go:89] found id: "dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.148510  585929 cri.go:89] found id: ""
	I1205 20:36:38.148519  585929 logs.go:282] 1 containers: [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f]
	I1205 20:36:38.148585  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.153619  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:36:38.153702  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:36:38.190467  585929 cri.go:89] found id: "40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.190495  585929 cri.go:89] found id: ""
	I1205 20:36:38.190505  585929 logs.go:282] 1 containers: [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d]
	I1205 20:36:38.190561  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.195177  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:36:38.195259  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:36:38.240020  585929 cri.go:89] found id: "444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:38.240045  585929 cri.go:89] found id: ""
	I1205 20:36:38.240054  585929 logs.go:282] 1 containers: [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43]
	I1205 20:36:38.240123  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.244359  585929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:36:38.244425  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:36:38.282241  585929 cri.go:89] found id: "18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:38.282267  585929 cri.go:89] found id: "587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.282284  585929 cri.go:89] found id: ""
	I1205 20:36:38.282292  585929 logs.go:282] 2 containers: [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66]
	I1205 20:36:38.282357  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.287437  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.291561  585929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:36:38.291621  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:36:38.333299  585929 cri.go:89] found id: ""
	I1205 20:36:38.333335  585929 logs.go:282] 0 containers: []
	W1205 20:36:38.333345  585929 logs.go:284] No container was found matching "kindnet"
	I1205 20:36:38.333352  585929 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:36:38.333411  585929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:36:38.370920  585929 cri.go:89] found id: "e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.370948  585929 cri.go:89] found id: "dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.370952  585929 cri.go:89] found id: ""
	I1205 20:36:38.370960  585929 logs.go:282] 2 containers: [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c]
	I1205 20:36:38.371037  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.375549  585929 ssh_runner.go:195] Run: which crictl
	I1205 20:36:38.379517  585929 logs.go:123] Gathering logs for kube-controller-manager [587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66] ...
	I1205 20:36:38.379548  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587008b58cfaaf1589aba9e2620ce315217bd76f426ef28720d9d1a21770ff66"
	I1205 20:36:38.416990  585929 logs.go:123] Gathering logs for kubelet ...
	I1205 20:36:38.417023  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:36:38.499859  585929 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:36:38.499905  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:36:38.625291  585929 logs.go:123] Gathering logs for kube-scheduler [40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d] ...
	I1205 20:36:38.625332  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40accb73a4e915cb2d0573ae2535c8fdf523de0567d2f4c342fd997205c2960d"
	I1205 20:36:38.672549  585929 logs.go:123] Gathering logs for coredns [dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f] ...
	I1205 20:36:38.672586  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd7068872d39b5c3588d39942ff0a61459c81c95510bf6c3279191ca4a1bd84f"
	I1205 20:36:38.710017  585929 logs.go:123] Gathering logs for storage-provisioner [e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8] ...
	I1205 20:36:38.710055  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6ee28be86cb2447db611706d31e389150f04a551c5c2ff6e2fa71a1df9de6e8"
	I1205 20:36:38.754004  585929 logs.go:123] Gathering logs for container status ...
	I1205 20:36:38.754049  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:36:38.802163  585929 logs.go:123] Gathering logs for dmesg ...
	I1205 20:36:38.802206  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:36:38.817670  585929 logs.go:123] Gathering logs for kube-apiserver [83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d] ...
	I1205 20:36:38.817704  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83b7cd17782f81ed14e15bf7f3e86ba4b86bb9c3cc5d33e985c950d2842a034d"
	I1205 20:36:38.864833  585929 logs.go:123] Gathering logs for etcd [62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff] ...
	I1205 20:36:38.864875  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b61ec6f08d5eaa5cf083db8b9236307a95bb1163d6cb00c2ddf1dac4ccddff"
	I1205 20:36:38.909490  585929 logs.go:123] Gathering logs for storage-provisioner [dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c] ...
	I1205 20:36:38.909526  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc7dc19930243b67a4744a4b50d4907bbd6cb8464d66150af19d1932d1ed3c2c"
	I1205 20:36:38.952117  585929 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:36:38.952164  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:36:39.347620  585929 logs.go:123] Gathering logs for kube-apiserver [e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36] ...
	I1205 20:36:39.347686  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2d9e7ffdd041b209d809cfcd00f0fa7a7c22612800ec33960b6c84fa709df36"
	I1205 20:36:39.392412  585929 logs.go:123] Gathering logs for kube-proxy [444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43] ...
	I1205 20:36:39.392450  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 444227d730d01740c9f62e602045de53afc13316e89d26193ea25388dae75b43"
	I1205 20:36:39.433711  585929 logs.go:123] Gathering logs for kube-controller-manager [18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c] ...
	I1205 20:36:39.433749  585929 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e899b1e640c2f5244f4b8050b0d3aa3a8d303b6fbde0b5eabafd4f4e95856c"
	I1205 20:36:41.996602  585929 system_pods.go:59] 8 kube-system pods found
	I1205 20:36:41.996634  585929 system_pods.go:61] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:41.996640  585929 system_pods.go:61] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:41.996644  585929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:41.996648  585929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:41.996651  585929 system_pods.go:61] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:41.996654  585929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:41.996661  585929 system_pods.go:61] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:41.996665  585929 system_pods.go:61] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:41.996674  585929 system_pods.go:74] duration metric: took 3.992162062s to wait for pod list to return data ...
	I1205 20:36:41.996682  585929 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:36:41.999553  585929 default_sa.go:45] found service account: "default"
	I1205 20:36:41.999580  585929 default_sa.go:55] duration metric: took 2.889197ms for default service account to be created ...
	I1205 20:36:41.999589  585929 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:36:42.005061  585929 system_pods.go:86] 8 kube-system pods found
	I1205 20:36:42.005099  585929 system_pods.go:89] "coredns-7c65d6cfc9-5drgc" [4adbcbc8-0974-4ed3-90d4-fc7f75ff83b6] Running
	I1205 20:36:42.005111  585929 system_pods.go:89] "etcd-default-k8s-diff-port-942599" [4041a965-abf4-45b3-a180-118601e72573] Running
	I1205 20:36:42.005118  585929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-942599" [ae1d7788-4feb-4e02-b0b2-bcaff984ff99] Running
	I1205 20:36:42.005126  585929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-942599" [5cfb734e-5a10-4066-95a1-b884817a0aea] Running
	I1205 20:36:42.005135  585929 system_pods.go:89] "kube-proxy-5vdcq" [be2e18fd-6980-45c9-87a4-f6d1ed31bf7b] Running
	I1205 20:36:42.005143  585929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-942599" [8deda727-a6c3-4523-8755-76217f6a8ddb] Running
	I1205 20:36:42.005159  585929 system_pods.go:89] "metrics-server-6867b74b74-rq8xm" [99b577fd-fbfd-4178-8b06-ef96f118c30b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:36:42.005171  585929 system_pods.go:89] "storage-provisioner" [8a858ec2-dc10-4501-8efa-72e2ea0c7927] Running
	I1205 20:36:42.005187  585929 system_pods.go:126] duration metric: took 5.591652ms to wait for k8s-apps to be running ...
	I1205 20:36:42.005201  585929 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:36:42.005267  585929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:42.021323  585929 system_svc.go:56] duration metric: took 16.10852ms WaitForService to wait for kubelet
	I1205 20:36:42.021358  585929 kubeadm.go:582] duration metric: took 4m22.584731606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:36:42.021424  585929 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:36:42.024632  585929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:42.024658  585929 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:42.024682  585929 node_conditions.go:105] duration metric: took 3.248548ms to run NodePressure ...
	I1205 20:36:42.024698  585929 start.go:241] waiting for startup goroutines ...
	I1205 20:36:42.024709  585929 start.go:246] waiting for cluster config update ...
	I1205 20:36:42.024742  585929 start.go:255] writing updated cluster config ...
	I1205 20:36:42.025047  585929 ssh_runner.go:195] Run: rm -f paused
	I1205 20:36:42.077303  585929 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:36:42.079398  585929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-942599" cluster and "default" namespace by default
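The repeated pod_ready.go lines in this section boil down to checking whether a pod's Ready condition has status "True" (metrics-server never reaches it within the 4m window). A minimal sketch of that condition check using client-go is shown below; the kubeconfig source and pod name are placeholders, and minikube's real helper additionally handles label selectors, retries, and timeouts:

```go
// podready: a minimal sketch of the Ready-condition check reported by the
// pod_ready.go lines above. Illustrative only, not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is "True".
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig path and pod name are illustrative placeholders.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-6867b74b74-rq8xm", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, isPodReady(pod))
}
```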
	I1205 20:36:53.411276  585025 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.297141231s)
	I1205 20:36:53.411423  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:53.432474  585025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:36:53.443908  585025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:36:53.454789  585025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:36:53.454821  585025 kubeadm.go:157] found existing configuration files:
	
	I1205 20:36:53.454873  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:36:53.465648  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:36:53.465719  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:36:53.476492  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:36:53.486436  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:36:53.486505  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:36:53.499146  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.510237  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:36:53.510324  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:36:53.521186  585025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:36:53.531797  585025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:36:53.531890  585025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
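The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed before kubeadm init re-creates it. A minimal shell sketch of the same check (the loop form is an assumption; the endpoint and paths are taken from the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done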
	I1205 20:36:53.543056  585025 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:36:53.735019  585025 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:01.531096  585025 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:37:01.531179  585025 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:37:01.531278  585025 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:37:01.531407  585025 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:37:01.531546  585025 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:37:01.531635  585025 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:37:01.533284  585025 out.go:235]   - Generating certificates and keys ...
	I1205 20:37:01.533400  585025 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:37:01.533484  585025 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:37:01.533589  585025 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:37:01.533676  585025 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:37:01.533741  585025 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:37:01.533820  585025 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:37:01.533901  585025 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:37:01.533954  585025 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:37:01.534023  585025 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:37:01.534097  585025 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:37:01.534137  585025 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:37:01.534193  585025 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:37:01.534264  585025 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:37:01.534347  585025 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:37:01.534414  585025 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:37:01.534479  585025 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:37:01.534529  585025 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:37:01.534600  585025 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:37:01.534656  585025 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:37:01.536208  585025 out.go:235]   - Booting up control plane ...
	I1205 20:37:01.536326  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:37:01.536394  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:37:01.536487  585025 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:37:01.536653  585025 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:37:01.536772  585025 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:37:01.536814  585025 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:37:01.536987  585025 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:37:01.537144  585025 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:37:01.537240  585025 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.640403ms
	I1205 20:37:01.537352  585025 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:37:01.537438  585025 kubeadm.go:310] [api-check] The API server is healthy after 5.002069704s
	I1205 20:37:01.537566  585025 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:37:01.537705  585025 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:37:01.537766  585025 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:37:01.537959  585025 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-816185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:37:01.538037  585025 kubeadm.go:310] [bootstrap-token] Using token: l8cx4j.koqnwrdaqrc08irs
	I1205 20:37:01.539683  585025 out.go:235]   - Configuring RBAC rules ...
	I1205 20:37:01.539813  585025 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:37:01.539945  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:37:01.540157  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:37:01.540346  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:37:01.540482  585025 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:37:01.540602  585025 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:37:01.540746  585025 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:37:01.540818  585025 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:37:01.540905  585025 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:37:01.540922  585025 kubeadm.go:310] 
	I1205 20:37:01.541012  585025 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:37:01.541027  585025 kubeadm.go:310] 
	I1205 20:37:01.541149  585025 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:37:01.541160  585025 kubeadm.go:310] 
	I1205 20:37:01.541197  585025 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:37:01.541253  585025 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:37:01.541297  585025 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:37:01.541303  585025 kubeadm.go:310] 
	I1205 20:37:01.541365  585025 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:37:01.541371  585025 kubeadm.go:310] 
	I1205 20:37:01.541417  585025 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:37:01.541427  585025 kubeadm.go:310] 
	I1205 20:37:01.541486  585025 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:37:01.541593  585025 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:37:01.541689  585025 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:37:01.541707  585025 kubeadm.go:310] 
	I1205 20:37:01.541811  585025 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:37:01.541917  585025 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:37:01.541928  585025 kubeadm.go:310] 
	I1205 20:37:01.542020  585025 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542138  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 \
	I1205 20:37:01.542171  585025 kubeadm.go:310] 	--control-plane 
	I1205 20:37:01.542180  585025 kubeadm.go:310] 
	I1205 20:37:01.542264  585025 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:37:01.542283  585025 kubeadm.go:310] 
	I1205 20:37:01.542407  585025 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l8cx4j.koqnwrdaqrc08irs \
	I1205 20:37:01.542513  585025 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0184fc4f1a90720b1d1563bffde1232169429577f60bd1edcb3fed601e87dcb8 
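If the join commands above were actually used, the token and CA hash could be cross-checked on the control plane first. A hedged example, assuming the CA certificate sits in the certificateDir reported earlier (/var/lib/minikube/certs) and using the kubeadm binary path shown in the log; the openssl pipeline is the standard way to recompute a discovery-token-ca-cert-hash:

    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm token list
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'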
	I1205 20:37:01.542530  585025 cni.go:84] Creating CNI manager for ""
	I1205 20:37:01.542538  585025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:37:01.543967  585025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:37:01.545652  585025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:37:01.557890  585025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
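The bridge CNI step only copies a single conflist onto the node; to see exactly what was written (file name taken from the log, contents not shown here), one could run:

    sudo ls -la /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist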
	I1205 20:37:01.577447  585025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:37:01.577532  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-816185 minikube.k8s.io/updated_at=2024_12_05T20_37_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=no-preload-816185 minikube.k8s.io/primary=true
	I1205 20:37:01.577542  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:01.618121  585025 ops.go:34] apiserver oom_adj: -16
	I1205 20:37:01.806825  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.307212  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:02.807893  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.307202  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:03.806891  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.307571  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:04.807485  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.307695  585025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:37:05.387751  585025 kubeadm.go:1113] duration metric: took 3.810307917s to wait for elevateKubeSystemPrivileges
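The repeated 'get sa default' runs above are minikube polling until the default service account exists after it grants cluster-admin to kube-system:default (elevateKubeSystemPrivileges). A rough shell equivalent, with the poll interval as an assumption and the commands and paths taken from the log:

    K="sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
    $K create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default
    until $K get sa default >/dev/null 2>&1; do sleep 0.5; done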
	I1205 20:37:05.387790  585025 kubeadm.go:394] duration metric: took 5m0.269375789s to StartCluster
	I1205 20:37:05.387810  585025 settings.go:142] acquiring lock: {Name:mk53b9e6d652790a330d8f10370186624dd74692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.387891  585025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:37:05.389703  585025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/kubeconfig: {Name:mk4b6ed1146527b814b1b3f267267ef059e38543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:37:05.389984  585025 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.37 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:37:05.390056  585025 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:37:05.390179  585025 config.go:182] Loaded profile config "no-preload-816185": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:37:05.390193  585025 addons.go:69] Setting storage-provisioner=true in profile "no-preload-816185"
	I1205 20:37:05.390216  585025 addons.go:69] Setting default-storageclass=true in profile "no-preload-816185"
	I1205 20:37:05.390246  585025 addons.go:69] Setting metrics-server=true in profile "no-preload-816185"
	I1205 20:37:05.390281  585025 addons.go:234] Setting addon metrics-server=true in "no-preload-816185"
	W1205 20:37:05.390295  585025 addons.go:243] addon metrics-server should already be in state true
	I1205 20:37:05.390340  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390255  585025 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-816185"
	I1205 20:37:05.390263  585025 addons.go:234] Setting addon storage-provisioner=true in "no-preload-816185"
	W1205 20:37:05.390463  585025 addons.go:243] addon storage-provisioner should already be in state true
	I1205 20:37:05.390533  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.390844  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390888  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.390852  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390947  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.390973  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391032  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.391810  585025 out.go:177] * Verifying Kubernetes components...
	I1205 20:37:05.393274  585025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:37:05.408078  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40259
	I1205 20:37:05.408366  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1205 20:37:05.408765  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.408780  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.409315  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409337  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409441  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.409465  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.409767  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409800  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.409941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.410249  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I1205 20:37:05.410487  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.410537  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.410753  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.411387  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.411412  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.411847  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.412515  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.412565  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.413770  585025 addons.go:234] Setting addon default-storageclass=true in "no-preload-816185"
	W1205 20:37:05.413796  585025 addons.go:243] addon default-storageclass should already be in state true
	I1205 20:37:05.413828  585025 host.go:66] Checking if "no-preload-816185" exists ...
	I1205 20:37:05.414184  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.414231  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.430214  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1205 20:37:05.430684  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.431260  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.431286  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.431697  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.431929  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.432941  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1205 20:37:05.433361  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.433835  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.433855  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.433933  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.434385  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.434596  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.434638  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I1205 20:37:05.435193  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.435667  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.435694  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.435994  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.436000  585025 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:37:05.436635  585025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:05.436657  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.436683  585025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:05.437421  585025 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.437441  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:37:05.437461  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.438221  585025 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:37:05.439704  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:37:05.439721  585025 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:37:05.439737  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.440522  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441031  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.441058  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.441198  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.441352  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.441458  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.441582  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.445842  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446223  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.446248  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.446449  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.446661  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.446806  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.446923  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.472870  585025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I1205 20:37:05.473520  585025 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:05.474053  585025 main.go:141] libmachine: Using API Version  1
	I1205 20:37:05.474080  585025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:05.474456  585025 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:05.474666  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetState
	I1205 20:37:05.476603  585025 main.go:141] libmachine: (no-preload-816185) Calling .DriverName
	I1205 20:37:05.476836  585025 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.476859  585025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:37:05.476886  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHHostname
	I1205 20:37:05.480063  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480546  585025 main.go:141] libmachine: (no-preload-816185) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:85:a7", ip: ""} in network mk-no-preload-816185: {Iface:virbr3 ExpiryTime:2024-12-05 21:31:40 +0000 UTC Type:0 Mac:52:54:00:5f:85:a7 Iaid: IPaddr:192.168.61.37 Prefix:24 Hostname:no-preload-816185 Clientid:01:52:54:00:5f:85:a7}
	I1205 20:37:05.480580  585025 main.go:141] libmachine: (no-preload-816185) DBG | domain no-preload-816185 has defined IP address 192.168.61.37 and MAC address 52:54:00:5f:85:a7 in network mk-no-preload-816185
	I1205 20:37:05.480941  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHPort
	I1205 20:37:05.481175  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHKeyPath
	I1205 20:37:05.481331  585025 main.go:141] libmachine: (no-preload-816185) Calling .GetSSHUsername
	I1205 20:37:05.481425  585025 sshutil.go:53] new ssh client: &{IP:192.168.61.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/no-preload-816185/id_rsa Username:docker}
	I1205 20:37:05.607284  585025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:37:05.627090  585025 node_ready.go:35] waiting up to 6m0s for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637577  585025 node_ready.go:49] node "no-preload-816185" has status "Ready":"True"
	I1205 20:37:05.637602  585025 node_ready.go:38] duration metric: took 10.476209ms for node "no-preload-816185" to be "Ready" ...
	I1205 20:37:05.637611  585025 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:05.642969  585025 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:05.696662  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:37:05.725276  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:37:05.725309  585025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:37:05.779102  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:37:05.779137  585025 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:37:05.814495  585025 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.814531  585025 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:37:05.823828  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:37:05.863152  585025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:37:05.948854  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.948895  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949242  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949266  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949275  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.949294  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.949302  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.949590  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.949601  585025 main.go:141] libmachine: (no-preload-816185) DBG | Closing plugin on server side
	I1205 20:37:05.949612  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:05.975655  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:05.975683  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:05.975962  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:05.975978  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004027  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.180164032s)
	I1205 20:37:07.004103  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004117  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004498  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004520  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.004535  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.004545  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.004802  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.004820  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208032  585025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.344819218s)
	I1205 20:37:07.208143  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208159  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208537  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208556  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208566  585025 main.go:141] libmachine: Making call to close driver server
	I1205 20:37:07.208573  585025 main.go:141] libmachine: (no-preload-816185) Calling .Close
	I1205 20:37:07.208846  585025 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:37:07.208860  585025 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:37:07.208871  585025 addons.go:475] Verifying addon metrics-server=true in "no-preload-816185"
	I1205 20:37:07.210487  585025 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:37:07.212093  585025 addons.go:510] duration metric: took 1.822047986s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
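With default-storageclass, storage-provisioner and metrics-server enabled, the created objects can be checked directly; an illustrative example that only uses names appearing elsewhere in this log:

    kubectl --context no-preload-816185 get storageclass
    kubectl --context no-preload-816185 -n kube-system get deploy metrics-server
    kubectl --context no-preload-816185 -n kube-system get pod storage-provisioner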
	I1205 20:37:07.658678  585025 pod_ready.go:103] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:08.156061  585025 pod_ready.go:93] pod "etcd-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:08.156094  585025 pod_ready.go:82] duration metric: took 2.513098547s for pod "etcd-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:08.156109  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:10.162704  585025 pod_ready.go:103] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:12.163550  585025 pod_ready.go:93] pod "kube-apiserver-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.163578  585025 pod_ready.go:82] duration metric: took 4.007461295s for pod "kube-apiserver-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.163601  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169123  585025 pod_ready.go:93] pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:12.169155  585025 pod_ready.go:82] duration metric: took 5.544964ms for pod "kube-controller-manager-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:12.169170  585025 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.175288  585025 pod_ready.go:103] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"False"
	I1205 20:37:14.676107  585025 pod_ready.go:93] pod "kube-scheduler-no-preload-816185" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:14.676137  585025 pod_ready.go:82] duration metric: took 2.506959209s for pod "kube-scheduler-no-preload-816185" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:14.676146  585025 pod_ready.go:39] duration metric: took 9.038525731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
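The readiness gates above can be reproduced with kubectl wait against the same component labels the log lists (a sketch, not what the test framework itself runs):

    kubectl --context no-preload-816185 -n kube-system wait --for=condition=Ready pod \
      --selector=component=kube-scheduler --timeout=6m0s
    kubectl --context no-preload-816185 -n kube-system wait --for=condition=Ready pod \
      --selector=k8s-app=kube-dns --timeout=6m0s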
	I1205 20:37:14.676165  585025 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:37:14.676222  585025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:37:14.692508  585025 api_server.go:72] duration metric: took 9.302489277s to wait for apiserver process to appear ...
	I1205 20:37:14.692540  585025 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:37:14.692562  585025 api_server.go:253] Checking apiserver healthz at https://192.168.61.37:8443/healthz ...
	I1205 20:37:14.697176  585025 api_server.go:279] https://192.168.61.37:8443/healthz returned 200:
	ok
	I1205 20:37:14.698320  585025 api_server.go:141] control plane version: v1.31.2
	I1205 20:37:14.698345  585025 api_server.go:131] duration metric: took 5.796971ms to wait for apiserver health ...
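The healthz probe above queries the API server anonymously; the same smoke check from a shell would look like this (endpoint and pgrep pattern taken from the log; -k skips TLS verification, acceptable only for this kind of check):

    curl -k https://192.168.61.37:8443/healthz
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'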
	I1205 20:37:14.698357  585025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:37:14.706456  585025 system_pods.go:59] 9 kube-system pods found
	I1205 20:37:14.706503  585025 system_pods.go:61] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.706512  585025 system_pods.go:61] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.706518  585025 system_pods.go:61] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.706524  585025 system_pods.go:61] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.706529  585025 system_pods.go:61] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.706534  585025 system_pods.go:61] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.706539  585025 system_pods.go:61] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.706549  585025 system_pods.go:61] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.706555  585025 system_pods.go:61] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.706565  585025 system_pods.go:74] duration metric: took 8.200516ms to wait for pod list to return data ...
	I1205 20:37:14.706577  585025 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:37:14.716217  585025 default_sa.go:45] found service account: "default"
	I1205 20:37:14.716259  585025 default_sa.go:55] duration metric: took 9.664045ms for default service account to be created ...
	I1205 20:37:14.716293  585025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:37:14.723293  585025 system_pods.go:86] 9 kube-system pods found
	I1205 20:37:14.723323  585025 system_pods.go:89] "coredns-7c65d6cfc9-fmcnh" [fb6a91c8-af65-4fb6-af77-0a6c45d224a7] Running
	I1205 20:37:14.723329  585025 system_pods.go:89] "coredns-7c65d6cfc9-gmc2j" [2bfc0f96-5ad3-42c7-ab2c-4a29cbeab20f] Running
	I1205 20:37:14.723333  585025 system_pods.go:89] "etcd-no-preload-816185" [b647e785-c865-47d9-9215-4b92783df8f0] Running
	I1205 20:37:14.723337  585025 system_pods.go:89] "kube-apiserver-no-preload-816185" [a4d257bd-3d3b-4833-9edd-7a7f764d9482] Running
	I1205 20:37:14.723342  585025 system_pods.go:89] "kube-controller-manager-no-preload-816185" [0487e25d-77df-4ab1-81a0-18c09d1b7f60] Running
	I1205 20:37:14.723346  585025 system_pods.go:89] "kube-proxy-q8thq" [8be5b50a-e564-4d80-82c4-357db41a3c1e] Running
	I1205 20:37:14.723349  585025 system_pods.go:89] "kube-scheduler-no-preload-816185" [187898da-a8e3-4ce1-9f70-d581133bef49] Running
	I1205 20:37:14.723355  585025 system_pods.go:89] "metrics-server-6867b74b74-8vmd6" [d838e6e3-bd74-4653-9289-4f5375b03d4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:37:14.723360  585025 system_pods.go:89] "storage-provisioner" [7f33e249-9330-428f-8feb-9f3cf44369be] Running
	I1205 20:37:14.723368  585025 system_pods.go:126] duration metric: took 7.067824ms to wait for k8s-apps to be running ...
	I1205 20:37:14.723375  585025 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:37:14.723422  585025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:14.744142  585025 system_svc.go:56] duration metric: took 20.751867ms WaitForService to wait for kubelet
	I1205 20:37:14.744179  585025 kubeadm.go:582] duration metric: took 9.354165706s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:37:14.744200  585025 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:37:14.751985  585025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:14.752026  585025 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:14.752043  585025 node_conditions.go:105] duration metric: took 7.836665ms to run NodePressure ...
	I1205 20:37:14.752069  585025 start.go:241] waiting for startup goroutines ...
	I1205 20:37:14.752081  585025 start.go:246] waiting for cluster config update ...
	I1205 20:37:14.752095  585025 start.go:255] writing updated cluster config ...
	I1205 20:37:14.752490  585025 ssh_runner.go:195] Run: rm -f paused
	I1205 20:37:14.806583  585025 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:37:14.808574  585025 out.go:177] * Done! kubectl is now configured to use "no-preload-816185" cluster and "default" namespace by default
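Note that metrics-server-6867b74b74-8vmd6 is still Pending when this profile finishes: the addon was pointed at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), so the image pull is presumably expected to fail in this test. To see why such a pod is stuck, one could run, for example:

    kubectl --context no-preload-816185 -n kube-system describe pod metrics-server-6867b74b74-8vmd6
    kubectl --context no-preload-816185 -n kube-system get events \
      --field-selector involvedObject.name=metrics-server-6867b74b74-8vmd6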
	I1205 20:37:17.029681  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:37:17.029940  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:37:17.029963  585602 kubeadm.go:310] 
	I1205 20:37:17.030022  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:37:17.030101  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:37:17.030128  585602 kubeadm.go:310] 
	I1205 20:37:17.030167  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:37:17.030209  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:37:17.030353  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:37:17.030369  585602 kubeadm.go:310] 
	I1205 20:37:17.030489  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:37:17.030540  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:37:17.030584  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:37:17.030594  585602 kubeadm.go:310] 
	I1205 20:37:17.030733  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:37:17.030843  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:37:17.030855  585602 kubeadm.go:310] 
	I1205 20:37:17.031025  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:37:17.031154  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:37:17.031268  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:37:17.031374  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:37:17.031386  585602 kubeadm.go:310] 
	I1205 20:37:17.032368  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:37:17.032493  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:37:17.032562  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 20:37:17.032709  585602 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
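For the old-k8s-version (v1.20.0) profile the kubelet never becomes healthy, so kubeadm times out in wait-control-plane. The next step on the node is exactly what the message suggests; the commands below are copied from the kubeadm output above:

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause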
	
	I1205 20:37:17.032762  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:37:17.518572  585602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:17.533868  585602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:37:17.547199  585602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:37:17.547224  585602 kubeadm.go:157] found existing configuration files:
	
	I1205 20:37:17.547272  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:37:17.556733  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:37:17.556801  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:37:17.566622  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:37:17.577044  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:37:17.577121  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:37:17.588726  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.599269  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:37:17.599346  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:37:17.609243  585602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:37:17.618947  585602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:37:17.619034  585602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:37:17.629228  585602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:37:17.878785  585602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:39:13.972213  585602 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 20:39:13.972379  585602 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 20:39:13.973936  585602 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 20:39:13.974035  585602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:39:13.974150  585602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:39:13.974251  585602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:39:13.974341  585602 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:39:13.974404  585602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:39:13.976164  585602 out.go:235]   - Generating certificates and keys ...
	I1205 20:39:13.976248  585602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:39:13.976339  585602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:39:13.976449  585602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:39:13.976538  585602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:39:13.976642  585602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:39:13.976736  585602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 20:39:13.976832  585602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:39:13.976924  585602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:39:13.977025  585602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:39:13.977131  585602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:39:13.977189  585602 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 20:39:13.977272  585602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:39:13.977389  585602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:39:13.977474  585602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:39:13.977566  585602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:39:13.977650  585602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:39:13.977776  585602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:39:13.977901  585602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:39:13.977976  585602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:39:13.978137  585602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:39:13.979473  585602 out.go:235]   - Booting up control plane ...
	I1205 20:39:13.979581  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:39:13.979664  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:39:13.979732  585602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:39:13.979803  585602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:39:13.979952  585602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:39:13.980017  585602 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 20:39:13.980107  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980396  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980511  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.980744  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.980843  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981116  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981227  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981439  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981528  585602 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 20:39:13.981718  585602 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 20:39:13.981731  585602 kubeadm.go:310] 
	I1205 20:39:13.981773  585602 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 20:39:13.981831  585602 kubeadm.go:310] 		timed out waiting for the condition
	I1205 20:39:13.981839  585602 kubeadm.go:310] 
	I1205 20:39:13.981888  585602 kubeadm.go:310] 	This error is likely caused by:
	I1205 20:39:13.981941  585602 kubeadm.go:310] 		- The kubelet is not running
	I1205 20:39:13.982052  585602 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 20:39:13.982059  585602 kubeadm.go:310] 
	I1205 20:39:13.982144  585602 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 20:39:13.982174  585602 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 20:39:13.982208  585602 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 20:39:13.982215  585602 kubeadm.go:310] 
	I1205 20:39:13.982302  585602 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 20:39:13.982415  585602 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 20:39:13.982431  585602 kubeadm.go:310] 
	I1205 20:39:13.982540  585602 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 20:39:13.982618  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 20:39:13.982701  585602 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 20:39:13.982766  585602 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 20:39:13.982839  585602 kubeadm.go:310] 
	I1205 20:39:13.982855  585602 kubeadm.go:394] duration metric: took 7m58.414377536s to StartCluster
	I1205 20:39:13.982907  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:39:13.982975  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:39:14.031730  585602 cri.go:89] found id: ""
	I1205 20:39:14.031767  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.031779  585602 logs.go:284] No container was found matching "kube-apiserver"
	I1205 20:39:14.031791  585602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:39:14.031865  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:39:14.068372  585602 cri.go:89] found id: ""
	I1205 20:39:14.068420  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.068433  585602 logs.go:284] No container was found matching "etcd"
	I1205 20:39:14.068440  585602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:39:14.068512  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:39:14.106807  585602 cri.go:89] found id: ""
	I1205 20:39:14.106837  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.106847  585602 logs.go:284] No container was found matching "coredns"
	I1205 20:39:14.106856  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:39:14.106930  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:39:14.144926  585602 cri.go:89] found id: ""
	I1205 20:39:14.144952  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.144960  585602 logs.go:284] No container was found matching "kube-scheduler"
	I1205 20:39:14.144974  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:39:14.145052  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:39:14.182712  585602 cri.go:89] found id: ""
	I1205 20:39:14.182742  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.182754  585602 logs.go:284] No container was found matching "kube-proxy"
	I1205 20:39:14.182762  585602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:39:14.182826  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:39:14.220469  585602 cri.go:89] found id: ""
	I1205 20:39:14.220505  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.220519  585602 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 20:39:14.220527  585602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:39:14.220593  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:39:14.269791  585602 cri.go:89] found id: ""
	I1205 20:39:14.269823  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.269835  585602 logs.go:284] No container was found matching "kindnet"
	I1205 20:39:14.269842  585602 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 20:39:14.269911  585602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 20:39:14.313406  585602 cri.go:89] found id: ""
	I1205 20:39:14.313439  585602 logs.go:282] 0 containers: []
	W1205 20:39:14.313450  585602 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 20:39:14.313464  585602 logs.go:123] Gathering logs for dmesg ...
	I1205 20:39:14.313483  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:39:14.330488  585602 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:39:14.330526  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 20:39:14.417358  585602 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 20:39:14.417403  585602 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:39:14.417421  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:39:14.530226  585602 logs.go:123] Gathering logs for container status ...
	I1205 20:39:14.530270  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:39:14.585471  585602 logs.go:123] Gathering logs for kubelet ...
	I1205 20:39:14.585512  585602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 20:39:14.636389  585602 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 20:39:14.636456  585602 out.go:270] * 
	W1205 20:39:14.636535  585602 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.636549  585602 out.go:270] * 
	W1205 20:39:14.637475  585602 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:39:14.640654  585602 out.go:201] 
	W1205 20:39:14.641873  585602 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 20:39:14.641931  585602 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 20:39:14.641975  585602 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 20:39:14.643389  585602 out.go:201] 
	
	
	==> CRI-O <==
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.622570008Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431785622544091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c976b457-43a4-4abd-833e-28261a6502d4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.623262303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d93a259a-f6aa-4d8d-abf0-09d520bab0e8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.623334242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d93a259a-f6aa-4d8d-abf0-09d520bab0e8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.623368297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d93a259a-f6aa-4d8d-abf0-09d520bab0e8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.656783868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=636ebd5d-366c-4b43-8f90-439a31b59140 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.656874252Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=636ebd5d-366c-4b43-8f90-439a31b59140 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.658048384Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0567e16-4727-4a14-95fd-d3a1c6bb6fbb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.658471933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431785658434770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0567e16-4727-4a14-95fd-d3a1c6bb6fbb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.659144853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64ecdebb-8d00-4e2d-9ea3-be6ea9a7f7e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.659212397Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64ecdebb-8d00-4e2d-9ea3-be6ea9a7f7e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.659279059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=64ecdebb-8d00-4e2d-9ea3-be6ea9a7f7e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.692128004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92aaa91c-a890-4052-8ecc-e6d112072ba1 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.692232283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92aaa91c-a890-4052-8ecc-e6d112072ba1 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.693432848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c9c0af2-dca5-4b9f-8254-2cc88c954b06 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.693868013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431785693838888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c9c0af2-dca5-4b9f-8254-2cc88c954b06 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.694379677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a03378f-4408-4891-b703-b6e0b65a1755 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.694455193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a03378f-4408-4891-b703-b6e0b65a1755 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.694522423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0a03378f-4408-4891-b703-b6e0b65a1755 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.728286127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e5082de-c99e-49ab-b434-b4bde574070f name=/runtime.v1.RuntimeService/Version
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.728409634Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e5082de-c99e-49ab-b434-b4bde574070f name=/runtime.v1.RuntimeService/Version
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.729904103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f72b15c-1f3b-4ba6-bc3a-9bf4c4e57987 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.730383378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431785730357992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f72b15c-1f3b-4ba6-bc3a-9bf4c4e57987 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.731123671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51a8c5ae-8149-482e-9e6f-d0ed4cc62678 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.731192111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51a8c5ae-8149-482e-9e6f-d0ed4cc62678 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:49:45 old-k8s-version-386085 crio[629]: time="2024-12-05 20:49:45.731236840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=51a8c5ae-8149-482e-9e6f-d0ed4cc62678 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 20:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053859] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048232] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.156020] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.849389] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.680157] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 5 20:31] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.058081] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059601] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.177616] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.149980] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.257256] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.927159] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.062736] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.953352] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +9.534888] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 5 20:35] systemd-fstab-generator[5061]: Ignoring "noauto" option for root device
	[Dec 5 20:37] systemd-fstab-generator[5344]: Ignoring "noauto" option for root device
	[  +0.073876] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:49:45 up 18 min,  0 users,  load average: 0.00, 0.01, 0.03
	Linux old-k8s-version-386085 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]: created by net.cgoLookupIP
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]: goroutine 155 [runnable]:
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000bc81c0)
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]: goroutine 156 [select]:
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000b0dbd0, 0xc000bf8101, 0xc000b7a700, 0xc000b86ec0, 0xc0003e5a80, 0xc0003e5a40)
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000bf81e0, 0x0, 0x0)
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000bc81c0)
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Dec 05 20:49:42 old-k8s-version-386085 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 05 20:49:42 old-k8s-version-386085 kubelet[6737]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Dec 05 20:49:43 old-k8s-version-386085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 129.
	Dec 05 20:49:43 old-k8s-version-386085 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 05 20:49:43 old-k8s-version-386085 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 05 20:49:43 old-k8s-version-386085 kubelet[6746]: I1205 20:49:43.264335    6746 server.go:416] Version: v1.20.0
	Dec 05 20:49:43 old-k8s-version-386085 kubelet[6746]: I1205 20:49:43.264573    6746 server.go:837] Client rotation is on, will bootstrap in background
	Dec 05 20:49:43 old-k8s-version-386085 kubelet[6746]: I1205 20:49:43.266381    6746 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 05 20:49:43 old-k8s-version-386085 kubelet[6746]: W1205 20:49:43.267193    6746 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 05 20:49:43 old-k8s-version-386085 kubelet[6746]: I1205 20:49:43.267563    6746 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
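
Editor's note: the repeated "[kubelet-check]" failures in the log above describe an HTTP GET against the kubelet healthz endpoint on localhost:10248 that keeps getting "connection refused", i.e. the kubelet never comes up. As an illustration only (not part of the test harness), a minimal Go sketch of that same probe:

	// kubelet healthz probe — mirrors the check kubeadm's [kubelet-check] lines describe.
	// "connection refused" means the kubelet is not listening on 10248 at all,
	// matching the failure captured in this log.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// e.g. dial tcp 127.0.0.1:10248: connect: connection refused
			fmt.Println("kubelet healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
	}

A healthy kubelet returns "ok" here; in this run the kubelet crash-loops (restart counter 129 in the journal above), so the probe never connects.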
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386085 -n old-k8s-version-386085
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 2 (241.224957ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-386085" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (85.58s)
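
Editor's note: the minikube output above points at issue #4172 and suggests retrying with --extra-config=kubelet.cgroup-driver=systemd, and the kubelet journal shows "Cannot detect current cgroup on cgroup v2", which is consistent with a cgroup-driver mismatch. Below is a hedged sketch, in the exec-based style of the harness, of retrying the failed start with that flag. It is not harness code; the binary path, profile name, and flags are simply the ones that appear in this report.

	// Retry the old-k8s-version start with the kubelet cgroup driver forced to systemd,
	// as the error output above suggests. Timeout and flag set are illustrative.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "start",
			"-p", "old-k8s-version-386085",
			"--kubernetes-version=v1.20.0",
			"--container-runtime=crio",
			"--driver=kvm2",
			"--extra-config=kubelet.cgroup-driver=systemd",
		)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("start failed:", err)
		}
	}

If the kubelet still fails to come up, 'journalctl -xeu kubelet' on the node remains the primary source of detail, as the kubeadm output itself advises.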

                                                
                                    

Test pass (239/311)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 25.37
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.2/json-events 16.04
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.15
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.62
22 TestOffline 114.16
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 199.82
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 10.54
35 TestAddons/parallel/Registry 17.85
37 TestAddons/parallel/InspektorGadget 10.85
40 TestAddons/parallel/CSI 58.27
41 TestAddons/parallel/Headlamp 20.01
42 TestAddons/parallel/CloudSpanner 5.9
43 TestAddons/parallel/LocalPath 56.4
44 TestAddons/parallel/NvidiaDevicePlugin 6.91
45 TestAddons/parallel/Yakd 12.43
48 TestCertOptions 122.49
49 TestCertExpiration 296.59
51 TestForceSystemdFlag 46.14
52 TestForceSystemdEnv 84.54
54 TestKVMDriverInstallOrUpdate 7.68
58 TestErrorSpam/setup 44.6
59 TestErrorSpam/start 0.38
60 TestErrorSpam/status 0.78
61 TestErrorSpam/pause 1.66
62 TestErrorSpam/unpause 1.85
63 TestErrorSpam/stop 5.42
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 82.35
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 33.19
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
75 TestFunctional/serial/CacheCmd/cache/add_local 2.23
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.75
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 35.13
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.45
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 4.34
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 20.6
91 TestFunctional/parallel/DryRun 0.32
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.12
97 TestFunctional/parallel/ServiceCmdConnect 7.78
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 43.14
101 TestFunctional/parallel/SSHCmd 0.49
102 TestFunctional/parallel/CpCmd 1.34
103 TestFunctional/parallel/MySQL 26.87
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.86
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
113 TestFunctional/parallel/License 0.63
114 TestFunctional/parallel/ServiceCmd/DeployApp 13.24
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
116 TestFunctional/parallel/MountCmd/any-port 11.84
117 TestFunctional/parallel/ProfileCmd/profile_list 0.37
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
119 TestFunctional/parallel/Version/short 0.06
120 TestFunctional/parallel/Version/components 1.03
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
124 TestFunctional/parallel/MountCmd/specific-port 1.72
125 TestFunctional/parallel/ServiceCmd/List 0.45
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
128 TestFunctional/parallel/MountCmd/VerifyCleanup 1.28
129 TestFunctional/parallel/ServiceCmd/Format 0.31
130 TestFunctional/parallel/ServiceCmd/URL 0.3
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.55
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.41
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.44
144 TestFunctional/parallel/ImageCommands/ImageBuild 12.18
145 TestFunctional/parallel/ImageCommands/Setup 1.87
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.02
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.23
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.51
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.96
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 206.52
160 TestMultiControlPlane/serial/DeployApp 7.52
161 TestMultiControlPlane/serial/PingHostFromPods 1.27
162 TestMultiControlPlane/serial/AddWorkerNode 57.29
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
165 TestMultiControlPlane/serial/CopyFile 13.55
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.95
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
178 TestJSONOutput/start/Command 87.56
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.73
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.67
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.38
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.05
207 TestMinikubeProfile 89.65
210 TestMountStart/serial/StartWithMountFirst 31.45
211 TestMountStart/serial/VerifyMountFirst 0.38
212 TestMountStart/serial/StartWithMountSecond 28.35
213 TestMountStart/serial/VerifyMountSecond 0.4
214 TestMountStart/serial/DeleteFirst 0.89
215 TestMountStart/serial/VerifyMountPostDelete 0.39
216 TestMountStart/serial/Stop 1.34
217 TestMountStart/serial/RestartStopped 23.12
218 TestMountStart/serial/VerifyMountPostStop 0.39
221 TestMultiNode/serial/FreshStart2Nodes 114.98
222 TestMultiNode/serial/DeployApp2Nodes 7.27
223 TestMultiNode/serial/PingHostFrom2Pods 0.85
224 TestMultiNode/serial/AddNode 50.93
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.6
227 TestMultiNode/serial/CopyFile 7.61
228 TestMultiNode/serial/StopNode 2.42
229 TestMultiNode/serial/StartAfterStop 40.95
231 TestMultiNode/serial/DeleteNode 2.31
233 TestMultiNode/serial/RestartMultiNode 178.69
234 TestMultiNode/serial/ValidateNameConflict 44.05
241 TestScheduledStopUnix 118.73
245 TestRunningBinaryUpgrade 233.52
249 TestStoppedBinaryUpgrade/Setup 2.6
250 TestStoppedBinaryUpgrade/Upgrade 148.33
259 TestPause/serial/Start 63.5
260 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
263 TestNoKubernetes/serial/StartWithK8s 49.88
265 TestNoKubernetes/serial/StartWithStopK8s 5.71
266 TestNoKubernetes/serial/Start 27.81
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
271 TestNoKubernetes/serial/ProfileList 1.34
276 TestNetworkPlugins/group/false 3.75
277 TestNoKubernetes/serial/Stop 1.45
278 TestNoKubernetes/serial/StartNoArgs 45.74
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
286 TestStartStop/group/no-preload/serial/FirstStart 118.92
288 TestStartStop/group/embed-certs/serial/FirstStart 89.33
289 TestStartStop/group/no-preload/serial/DeployApp 11.28
290 TestStartStop/group/embed-certs/serial/DeployApp 10.28
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.4
299 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
300 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
304 TestStartStop/group/no-preload/serial/SecondStart 689.09
305 TestStartStop/group/embed-certs/serial/SecondStart 607.88
306 TestStartStop/group/old-k8s-version/serial/Stop 3.43
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 519.38
320 TestStartStop/group/newest-cni/serial/FirstStart 48.68
321 TestStartStop/group/newest-cni/serial/DeployApp 0
322 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.15
323 TestStartStop/group/newest-cni/serial/Stop 10.6
324 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
325 TestStartStop/group/newest-cni/serial/SecondStart 41.91
326 TestNetworkPlugins/group/auto/Start 98.37
327 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
330 TestStartStop/group/newest-cni/serial/Pause 2.56
331 TestNetworkPlugins/group/enable-default-cni/Start 57.78
332 TestNetworkPlugins/group/flannel/Start 98.53
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.29
335 TestNetworkPlugins/group/auto/KubeletFlags 0.23
336 TestNetworkPlugins/group/auto/NetCatPod 11.28
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
340 TestNetworkPlugins/group/auto/DNS 0.17
341 TestNetworkPlugins/group/auto/Localhost 0.18
342 TestNetworkPlugins/group/auto/HairPin 0.14
343 TestNetworkPlugins/group/bridge/Start 62.09
344 TestNetworkPlugins/group/calico/Start 108.63
345 TestNetworkPlugins/group/flannel/ControllerPod 6.01
346 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
347 TestNetworkPlugins/group/flannel/NetCatPod 10.22
348 TestNetworkPlugins/group/flannel/DNS 0.2
349 TestNetworkPlugins/group/flannel/Localhost 0.16
350 TestNetworkPlugins/group/flannel/HairPin 0.19
351 TestNetworkPlugins/group/custom-flannel/Start 84.86
352 TestNetworkPlugins/group/kindnet/Start 106.32
353 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
354 TestNetworkPlugins/group/bridge/NetCatPod 13.33
355 TestNetworkPlugins/group/bridge/DNS 10.16
356 TestNetworkPlugins/group/bridge/Localhost 0.13
357 TestNetworkPlugins/group/bridge/HairPin 0.15
358 TestNetworkPlugins/group/calico/ControllerPod 6.01
359 TestNetworkPlugins/group/calico/KubeletFlags 0.31
360 TestNetworkPlugins/group/calico/NetCatPod 11.23
361 TestNetworkPlugins/group/calico/DNS 0.17
362 TestNetworkPlugins/group/calico/Localhost 0.13
363 TestNetworkPlugins/group/calico/HairPin 0.13
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
366 TestNetworkPlugins/group/custom-flannel/DNS 0.18
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
369 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
370 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
371 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
372 TestNetworkPlugins/group/kindnet/DNS 0.15
373 TestNetworkPlugins/group/kindnet/Localhost 0.13
374 TestNetworkPlugins/group/kindnet/HairPin 0.13

TestDownloadOnly/v1.20.0/json-events (25.37s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-196484 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-196484 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.369899185s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.37s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1205 19:02:13.205685  538186 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1205 19:02:13.205812  538186 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-196484
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-196484: exit status 85 (70.461914ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-196484 | jenkins | v1.34.0 | 05 Dec 24 19:01 UTC |          |
	|         | -p download-only-196484        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:01:47
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:01:47.880385  538198 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:01:47.880625  538198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:01:47.880634  538198 out.go:358] Setting ErrFile to fd 2...
	I1205 19:01:47.880639  538198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:01:47.880834  538198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	W1205 19:01:47.880967  538198 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20052-530897/.minikube/config/config.json: open /home/jenkins/minikube-integration/20052-530897/.minikube/config/config.json: no such file or directory
	I1205 19:01:47.881516  538198 out.go:352] Setting JSON to true
	I1205 19:01:47.882585  538198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6254,"bootTime":1733419054,"procs":284,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:01:47.882704  538198 start.go:139] virtualization: kvm guest
	I1205 19:01:47.885139  538198 out.go:97] [download-only-196484] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1205 19:01:47.885252  538198 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 19:01:47.885332  538198 notify.go:220] Checking for updates...
	I1205 19:01:47.886463  538198 out.go:169] MINIKUBE_LOCATION=20052
	I1205 19:01:47.887775  538198 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:01:47.889127  538198 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:01:47.890334  538198 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:01:47.891483  538198 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 19:01:47.894072  538198 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:01:47.894296  538198 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:01:47.926645  538198 out.go:97] Using the kvm2 driver based on user configuration
	I1205 19:01:47.926672  538198 start.go:297] selected driver: kvm2
	I1205 19:01:47.926681  538198 start.go:901] validating driver "kvm2" against <nil>
	I1205 19:01:47.927045  538198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:01:47.927129  538198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:01:47.942544  538198 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:01:47.942607  538198 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:01:47.943177  538198 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1205 19:01:47.943348  538198 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 19:01:47.943385  538198 cni.go:84] Creating CNI manager for ""
	I1205 19:01:47.943450  538198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:01:47.943462  538198 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 19:01:47.943526  538198 start.go:340] cluster config:
	{Name:download-only-196484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-196484 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:01:47.943731  538198 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:01:47.945524  538198 out.go:97] Downloading VM boot image ...
	I1205 19:01:47.945562  538198 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 19:01:58.425202  538198 out.go:97] Starting "download-only-196484" primary control-plane node in "download-only-196484" cluster
	I1205 19:01:58.425237  538198 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 19:01:58.534082  538198 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 19:01:58.534126  538198 cache.go:56] Caching tarball of preloaded images
	I1205 19:01:58.534301  538198 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 19:01:58.536388  538198 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1205 19:01:58.536406  538198 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:01:58.651563  538198 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 19:02:11.184717  538198 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:02:11.184817  538198 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:02:12.106307  538198 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 19:02:12.106645  538198 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/download-only-196484/config.json ...
	I1205 19:02:12.106677  538198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/download-only-196484/config.json: {Name:mk898860f4698bfc9323290be680f4ce2e21391f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:02:12.106844  538198 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 19:02:12.107025  538198 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-196484 host does not exist
	  To start a cluster, run: "minikube start -p download-only-196484"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-196484
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.2/json-events (16.04s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-765744 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-765744 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.042070614s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (16.04s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1205 19:02:29.604279  538186 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1205 19:02:29.604338  538186 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-765744
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-765744: exit status 85 (68.215657ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-196484 | jenkins | v1.34.0 | 05 Dec 24 19:01 UTC |                     |
	|         | -p download-only-196484        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:02 UTC |
	| delete  | -p download-only-196484        | download-only-196484 | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC | 05 Dec 24 19:02 UTC |
	| start   | -o=json --download-only        | download-only-765744 | jenkins | v1.34.0 | 05 Dec 24 19:02 UTC |                     |
	|         | -p download-only-765744        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 19:02:13
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:02:13.607847  538451 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:02:13.608089  538451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:02:13.608098  538451 out.go:358] Setting ErrFile to fd 2...
	I1205 19:02:13.608102  538451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:02:13.608312  538451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:02:13.608888  538451 out.go:352] Setting JSON to true
	I1205 19:02:13.609901  538451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6280,"bootTime":1733419054,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:02:13.610016  538451 start.go:139] virtualization: kvm guest
	I1205 19:02:13.612593  538451 out.go:97] [download-only-765744] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:02:13.612784  538451 notify.go:220] Checking for updates...
	I1205 19:02:13.614575  538451 out.go:169] MINIKUBE_LOCATION=20052
	I1205 19:02:13.616188  538451 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:02:13.617889  538451 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:02:13.619458  538451 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:02:13.620872  538451 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 19:02:13.623963  538451 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:02:13.624246  538451 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:02:13.656822  538451 out.go:97] Using the kvm2 driver based on user configuration
	I1205 19:02:13.656865  538451 start.go:297] selected driver: kvm2
	I1205 19:02:13.656873  538451 start.go:901] validating driver "kvm2" against <nil>
	I1205 19:02:13.657367  538451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:02:13.657479  538451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20052-530897/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:02:13.673463  538451 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 19:02:13.673544  538451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 19:02:13.674155  538451 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1205 19:02:13.674315  538451 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 19:02:13.674351  538451 cni.go:84] Creating CNI manager for ""
	I1205 19:02:13.674402  538451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:02:13.674411  538451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 19:02:13.674460  538451 start.go:340] cluster config:
	{Name:download-only-765744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-765744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:02:13.674564  538451 iso.go:125] acquiring lock: {Name:mk778929df466edaca8cb6d38427acedfae32b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:02:13.676565  538451 out.go:97] Starting "download-only-765744" primary control-plane node in "download-only-765744" cluster
	I1205 19:02:13.676582  538451 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:02:14.232361  538451 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:02:14.232407  538451 cache.go:56] Caching tarball of preloaded images
	I1205 19:02:14.232585  538451 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:02:14.234610  538451 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1205 19:02:14.234647  538451 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:02:14.339304  538451 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 19:02:27.523215  538451 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:02:27.523324  538451 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20052-530897/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:02:28.279382  538451 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 19:02:28.279758  538451 profile.go:143] Saving config to /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/download-only-765744/config.json ...
	I1205 19:02:28.279791  538451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/download-only-765744/config.json: {Name:mk1ae7ef36f696fa3af3d6aafa68865189180a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:02:28.279957  538451 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 19:02:28.280111  538451 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20052-530897/.minikube/cache/linux/amd64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-765744 host does not exist
	  To start a cluster, run: "minikube start -p download-only-765744"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

TestDownloadOnly/v1.31.2/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.15s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-765744
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1205 19:02:30.235712  538186 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-199569 --alsologtostderr --binary-mirror http://127.0.0.1:46195 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-199569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-199569
--- PASS: TestBinaryMirror (0.62s)

TestOffline (114.16s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-974924 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-974924 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m53.123445903s)
helpers_test.go:175: Cleaning up "offline-crio-974924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-974924
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-974924: (1.041149493s)
--- PASS: TestOffline (114.16s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-396564
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-396564: exit status 85 (58.45898ms)

                                                
                                                
-- stdout --
	* Profile "addons-396564" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-396564"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-396564
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-396564: exit status 85 (59.293035ms)

                                                
                                                
-- stdout --
	* Profile "addons-396564" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-396564"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (199.82s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-396564 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-396564 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m19.823831664s)
--- PASS: TestAddons/Setup (199.82s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-396564 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-396564 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (10.54s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-396564 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-396564 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0ab9fb43-6d1a-4c93-b7a8-53945e058344] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0ab9fb43-6d1a-4c93-b7a8-53945e058344] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.006078417s
addons_test.go:633: (dbg) Run:  kubectl --context addons-396564 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-396564 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-396564 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.54s)

TestAddons/parallel/Registry (17.85s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.069821ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-ljr8x" [0b9f7adc-96cd-4c61-aab5-70400f03a848] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002653862s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jzvwd" [7d2f7d65-082f-42f9-a2e0-4329066b06c6] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004472554s
addons_test.go:331: (dbg) Run:  kubectl --context addons-396564 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-396564 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-396564 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.861342392s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 ip
2024/12/05 19:06:27 [DEBUG] GET http://192.168.39.9:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.85s)

TestAddons/parallel/InspektorGadget (10.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pn2gd" [4368d656-897b-4bcf-86f8-ed2092cdc99f] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005032092s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-396564 addons disable inspektor-gadget --alsologtostderr -v=1: (5.842836651s)
--- PASS: TestAddons/parallel/InspektorGadget (10.85s)

TestAddons/parallel/CSI (58.27s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1205 19:06:28.461969  538186 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1205 19:06:28.467270  538186 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1205 19:06:28.467300  538186 kapi.go:107] duration metric: took 5.367347ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.378975ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-396564 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-396564 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5a08cd6e-ce38-44c9-b313-9c9281dc5ecb] Pending
helpers_test.go:344: "task-pv-pod" [5a08cd6e-ce38-44c9-b313-9c9281dc5ecb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5a08cd6e-ce38-44c9-b313-9c9281dc5ecb] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.004987874s
addons_test.go:511: (dbg) Run:  kubectl --context addons-396564 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-396564 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-396564 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-396564 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-396564 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-396564 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-396564 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [60b35f80-a8db-470b-9a72-02fa40c95cdc] Pending
helpers_test.go:344: "task-pv-pod-restore" [60b35f80-a8db-470b-9a72-02fa40c95cdc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [60b35f80-a8db-470b-9a72-02fa40c95cdc] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004206979s
addons_test.go:553: (dbg) Run:  kubectl --context addons-396564 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-396564 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-396564 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-396564 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.861084331s)
--- PASS: TestAddons/parallel/CSI (58.27s)
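
The snapshot/restore sequence above boils down to two small objects: a VolumeSnapshot taken from the "hpvc" claim, and a second claim ("hpvc-restore") that names that snapshot as its dataSource. The sketch below is illustrative only; the real manifests live in testdata/csi-hostpath-driver in the minikube repo, and the class names and size shown here are assumptions, not values taken from that testdata.

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
      source:
        persistentVolumeClaimName: hpvc                 # claim created earlier in the test
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc                 # assumed class name
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi                                  # assumed size
      dataSource:                                       # restore from the snapshot instead of an empty volume
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io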

TestAddons/parallel/Headlamp (20.01s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-396564 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-vfzht" [3f626b42-c554-4959-a39a-6e4d9a4f6fc5] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-vfzht" [3f626b42-c554-4959-a39a-6e4d9a4f6fc5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-vfzht" [3f626b42-c554-4959-a39a-6e4d9a4f6fc5] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004825243s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-396564 addons disable headlamp --alsologtostderr -v=1: (6.101827388s)
--- PASS: TestAddons/parallel/Headlamp (20.01s)

TestAddons/parallel/CloudSpanner (5.9s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-wjwcr" [5205d364-82a0-492c-b693-c4d4627dc80a] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005743875s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.90s)

TestAddons/parallel/LocalPath (56.4s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-396564 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-396564 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-396564 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4fc353e0-7d7c-4dab-923c-35dee084b72d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4fc353e0-7d7c-4dab-923c-35dee084b72d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4fc353e0-7d7c-4dab-923c-35dee084b72d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003771421s
addons_test.go:906: (dbg) Run:  kubectl --context addons-396564 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 ssh "cat /opt/local-path-provisioner/pvc-41b3db4e-7b14-4edb-9a67-ba393129c596_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-396564 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-396564 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-396564 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.498055626s)
--- PASS: TestAddons/parallel/LocalPath (56.40s)
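
The local-path exercise is driven by an equally small claim bound to the provisioner's storage class, which the test-local-path pod then writes a file into. A minimal sketch of such a claim, assuming the provisioner's usual default class name and an arbitrary size (the actual manifest is testdata/storage-provisioner-rancher/pvc.yaml):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: local-path   # default class of rancher/local-path-provisioner (assumed here)
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 64Mi              # assumed size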

TestAddons/parallel/NvidiaDevicePlugin (6.91s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pngv4" [53fc8bbc-5529-4aaf-81c2-c11c9b882577] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004419999s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.91s)

TestAddons/parallel/Yakd (12.43s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-rf86s" [5b9dccbb-b685-491e-a738-571dc8c82879] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004437603s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-396564 addons disable yakd --alsologtostderr -v=1: (6.426708353s)
--- PASS: TestAddons/parallel/Yakd (12.43s)

TestCertOptions (122.49s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-790679 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-790679 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (2m0.971840629s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-790679 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-790679 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-790679 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-790679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-790679
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-790679: (1.029511283s)
--- PASS: TestCertOptions (122.49s)
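
For reference, the certificate checks above can be reproduced by hand roughly as follows; the flags, profile name, and certificate path are taken from the run above, while the grep filters are added here purely for illustration.

    out/minikube-linux-amd64 start -p cert-options-790679 --memory=2048 \
      --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # the custom IPs/names should show up as Subject Alternative Names
    out/minikube-linux-amd64 -p cert-options-790679 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # kubeconfig should point at the non-default API server port 8555
    kubectl --context cert-options-790679 config view | grep server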

                                                
                                    
TestCertExpiration (296.59s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-315387 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-315387 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m36.756036886s)
E1205 20:20:51.381731  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-315387 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-315387 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (18.806313983s)
helpers_test.go:175: Cleaning up "cert-expiration-315387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-315387
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-315387: (1.027263757s)
--- PASS: TestCertExpiration (296.59s)
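
A minimal sketch of the flow this test exercises: bring the profile up with short-lived certificates, let them lapse, then start again with a longer --cert-expiration so the certificates are regenerated. The sleep is illustrative; the test controls its own timing.

    out/minikube-linux-amd64 start -p cert-expiration-315387 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    sleep 180   # wait out the 3m expiry window (illustrative)
    out/minikube-linux-amd64 start -p cert-expiration-315387 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p cert-expiration-315387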

                                                
                                    
TestForceSystemdFlag (46.14s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-130544 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-130544 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.936856466s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-130544 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-130544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-130544
--- PASS: TestForceSystemdFlag (46.14s)
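
To spot-check the same behaviour by hand, start a profile with --force-systemd and inspect the CRI-O drop-in the test reads; the file path comes straight from the run, while expecting a cgroup_manager = "systemd" entry there is an assumption made for this sketch.

    out/minikube-linux-amd64 start -p force-systemd-flag-130544 --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p force-systemd-flag-130544 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager
    out/minikube-linux-amd64 delete -p force-systemd-flag-130544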

                                                
                                    
TestForceSystemdEnv (84.54s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-801098 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-801098 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m23.681056184s)
helpers_test.go:175: Cleaning up "force-systemd-env-801098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-801098
--- PASS: TestForceSystemdEnv (84.54s)

                                                
                                    
TestKVMDriverInstallOrUpdate (7.68s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1205 20:16:08.704161  538186 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 20:16:08.704369  538186 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1205 20:16:08.734925  538186 install.go:62] docker-machine-driver-kvm2: exit status 1
W1205 20:16:08.735333  538186 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 20:16:08.735422  538186 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2196178886/001/docker-machine-driver-kvm2
I1205 20:16:09.350722  538186 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2196178886/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc0007200b0 gz:0xc0007200b8 tar:0xc000720040 tar.bz2:0xc000720050 tar.gz:0xc000720060 tar.xz:0xc000720090 tar.zst:0xc0007200a0 tbz2:0xc000720050 tgz:0xc000720060 txz:0xc000720090 tzst:0xc0007200a0 xz:0xc0007200c0 zip:0xc0007200d0 zst:0xc0007200c8] Getters:map[file:0xc001ce4e60 http:0xc00071eeb0 https:0xc00071ef00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 20:16:09.350842  538186 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2196178886/001/docker-machine-driver-kvm2
I1205 20:16:12.975974  538186 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 20:16:12.976091  538186 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1205 20:16:13.011634  538186 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1205 20:16:13.011670  538186 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1205 20:16:13.011754  538186 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 20:16:13.011793  538186 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2196178886/002/docker-machine-driver-kvm2
I1205 20:16:13.516764  538186 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2196178886/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc0007200b0 gz:0xc0007200b8 tar:0xc000720040 tar.bz2:0xc000720050 tar.gz:0xc000720060 tar.xz:0xc000720090 tar.zst:0xc0007200a0 tbz2:0xc000720050 tgz:0xc000720060 txz:0xc000720090 tzst:0xc0007200a0 xz:0xc0007200c0 zip:0xc0007200d0 zst:0xc0007200c8] Getters:map[file:0xc002022260 http:0xc000d062d0 https:0xc000d06320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 20:16:13.516821  538186 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2196178886/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (7.68s)
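
The warnings above are the expected fallback path: the checksum fetch for the arch-suffixed asset returns 404, so the download is retried against the unsuffixed "common" asset. A rough curl equivalent of that fallback, using the same release URLs (the local output path is illustrative, and minikube's real flow also verifies the checksum it downloads):

    # the arch-specific checksum file is missing (HTTP 404) ...
    curl -fsI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 || echo "arch-specific checksum missing"
    # ... so the unsuffixed "common" artifact is fetched instead
    curl -fLo /tmp/docker-machine-driver-kvm2 https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2
    chmod +x /tmp/docker-machine-driver-kvm2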

                                                
                                    
TestErrorSpam/setup (44.6s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-933722 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-933722 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-933722 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-933722 --driver=kvm2  --container-runtime=crio: (44.604462411s)
--- PASS: TestErrorSpam/setup (44.60s)

                                                
                                    
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
TestErrorSpam/pause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 pause
--- PASS: TestErrorSpam/pause (1.66s)

                                                
                                    
TestErrorSpam/unpause (1.85s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

                                                
                                    
TestErrorSpam/stop (5.42s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 stop: (2.356569491s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 stop: (2.019173705s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-933722 --log_dir /tmp/nospam-933722 stop: (1.038783714s)
--- PASS: TestErrorSpam/stop (5.42s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20052-530897/.minikube/files/etc/test/nested/copy/538186/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (82.35s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583983 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1205 19:15:51.382592  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:51.389035  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:51.400366  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:51.421741  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:51.463184  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:51.544638  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:51.706152  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:52.027828  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:52.669519  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:53.951964  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:15:56.515327  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:16:01.637523  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:16:11.879287  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:16:32.361581  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-583983 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m22.353250336s)
--- PASS: TestFunctional/serial/StartWithProxy (82.35s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (33.19s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1205 19:16:50.856880  538186 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583983 --alsologtostderr -v=8
E1205 19:17:13.322946  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-583983 --alsologtostderr -v=8: (33.185283479s)
functional_test.go:663: soft start took 33.185990792s for "functional-583983" cluster.
I1205 19:17:24.042522  538186 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (33.19s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-583983 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-583983 cache add registry.k8s.io/pause:3.1: (1.114974143s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-583983 cache add registry.k8s.io/pause:3.3: (1.188347478s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-583983 cache add registry.k8s.io/pause:latest: (1.175091866s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)
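
These cache operations can be tried directly against the same profile; the images are the ones the test adds, and the final crictl grep is just an illustrative way to confirm they landed in the node's image store.

    out/minikube-linux-amd64 -p functional-583983 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-amd64 -p functional-583983 cache add registry.k8s.io/pause:latest
    out/minikube-linux-amd64 cache list
    out/minikube-linux-amd64 -p functional-583983 ssh sudo crictl images | grep pause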

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-583983 /tmp/TestFunctionalserialCacheCmdcacheadd_local1127779401/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 cache add minikube-local-cache-test:functional-583983
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-583983 cache add minikube-local-cache-test:functional-583983: (1.897119306s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 cache delete minikube-local-cache-test:functional-583983
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-583983
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583983 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (228.138925ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-583983 cache reload: (1.026461661s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)
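
A sketch of the round-trip above: remove the image inside the node, confirm crictl no longer finds it (the non-zero exit is the expected outcome), then restore it from the host-side cache with cache reload.

    out/minikube-linux-amd64 -p functional-583983 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-583983 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image gone, as expected"
    out/minikube-linux-amd64 -p functional-583983 cache reload
    out/minikube-linux-amd64 -p functional-583983 ssh sudo crictl inspecti registry.k8s.io/pause:latest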

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 kubectl -- --context functional-583983 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-583983 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583983 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-583983 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.129043597s)
functional_test.go:761: restart took 35.129173716s for "functional-583983" cluster.
I1205 19:18:07.444756  538186 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (35.13s)
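
The restart above shows the --extra-config pattern for passing component flags through to the cluster; the grep against the running kube-apiserver pod is an illustrative way to confirm the admission-plugin setting took effect.

    out/minikube-linux-amd64 start -p functional-583983 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-583983 -n kube-system get pods -l component=kube-apiserver -o yaml | grep enable-admission-plugins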

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-583983 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
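
The health check boils down to reading the control-plane pods' phase and readiness; a hand-rolled jsonpath version of the same query (the label selector matches the one the test uses, the output columns are illustrative):

    kubectl --context functional-583983 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'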

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-583983 logs: (1.44525016s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 logs --file /tmp/TestFunctionalserialLogsFileCmd1619203187/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-583983 logs --file /tmp/TestFunctionalserialLogsFileCmd1619203187/001/logs.txt: (1.474476971s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (4.34s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-583983 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-583983
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-583983: exit status 115 (289.014492ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.49:31047 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-583983 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)
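
What this asserts: "minikube service" against a Service whose selector matches no running pod should fail fast with SVC_UNREACHABLE (exit status 115) instead of hanging. Reproducing it looks roughly like this; the explicit exit-status echo is illustrative.

    kubectl --context functional-583983 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-583983 || echo "exit status $?"
    kubectl --context functional-583983 delete -f testdata/invalidsvc.yaml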

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583983 config get cpus: exit status 14 (66.716974ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583983 config get cpus: exit status 14 (54.000632ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
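
The round-trip above: config get on an unset key exits with status 14, a set value reads back, and unset returns the key to the error state. By hand:

    out/minikube-linux-amd64 -p functional-583983 config get cpus || echo "unset (exit $?)"
    out/minikube-linux-amd64 -p functional-583983 config set cpus 2
    out/minikube-linux-amd64 -p functional-583983 config get cpus
    out/minikube-linux-amd64 -p functional-583983 config unset cpus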

                                                
                                    
TestFunctional/parallel/DashboardCmd (20.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-583983 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-583983 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 547247: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.60s)

                                                
                                    
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583983 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-583983 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (162.429512ms)

                                                
                                                
-- stdout --
	* [functional-583983] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:18:16.542460  546812 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:18:16.542612  546812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:18:16.542622  546812 out.go:358] Setting ErrFile to fd 2...
	I1205 19:18:16.542627  546812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:18:16.543229  546812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:18:16.543951  546812 out.go:352] Setting JSON to false
	I1205 19:18:16.546407  546812 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7243,"bootTime":1733419054,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:18:16.546521  546812 start.go:139] virtualization: kvm guest
	I1205 19:18:16.548132  546812 out.go:177] * [functional-583983] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:18:16.550080  546812 notify.go:220] Checking for updates...
	I1205 19:18:16.550109  546812 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:18:16.551387  546812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:18:16.553089  546812 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:18:16.554467  546812 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:18:16.555783  546812 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:18:16.557125  546812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:18:16.559045  546812 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:18:16.559607  546812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:18:16.559671  546812 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:18:16.576713  546812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I1205 19:18:16.577548  546812 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:18:16.578194  546812 main.go:141] libmachine: Using API Version  1
	I1205 19:18:16.578211  546812 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:18:16.578672  546812 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:18:16.578881  546812 main.go:141] libmachine: (functional-583983) Calling .DriverName
	I1205 19:18:16.579155  546812 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:18:16.579436  546812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:18:16.579462  546812 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:18:16.604312  546812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
	I1205 19:18:16.604745  546812 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:18:16.605226  546812 main.go:141] libmachine: Using API Version  1
	I1205 19:18:16.605247  546812 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:18:16.605631  546812 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:18:16.605805  546812 main.go:141] libmachine: (functional-583983) Calling .DriverName
	I1205 19:18:16.640395  546812 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 19:18:16.641768  546812 start.go:297] selected driver: kvm2
	I1205 19:18:16.641789  546812 start.go:901] validating driver "kvm2" against &{Name:functional-583983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-583983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:18:16.641961  546812 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:18:16.644357  546812 out.go:201] 
	W1205 19:18:16.645768  546812 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 19:18:16.647118  546812 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583983 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
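
The point of the dry run: a 250MB request is rejected client-side with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) before anything is created, while a dry run against the existing profile's settings succeeds. Roughly, with the exit-status echo added for illustration:

    out/minikube-linux-amd64 start -p functional-583983 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio; echo "exit status $?"
    out/minikube-linux-amd64 start -p functional-583983 --dry-run --driver=kvm2 --container-runtime=crio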

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-583983 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-583983 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (179.624821ms)

                                                
                                                
-- stdout --
	* [functional-583983] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:18:16.372164  546740 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:18:16.372339  546740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:18:16.372354  546740 out.go:358] Setting ErrFile to fd 2...
	I1205 19:18:16.372371  546740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:18:16.372699  546740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:18:16.373254  546740 out.go:352] Setting JSON to false
	I1205 19:18:16.374312  546740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7242,"bootTime":1733419054,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:18:16.374429  546740 start.go:139] virtualization: kvm guest
	I1205 19:18:16.376970  546740 out.go:177] * [functional-583983] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1205 19:18:16.378699  546740 notify.go:220] Checking for updates...
	I1205 19:18:16.379354  546740 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 19:18:16.381168  546740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:18:16.382841  546740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 19:18:16.384339  546740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 19:18:16.386088  546740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:18:16.387771  546740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:18:16.389813  546740 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:18:16.390483  546740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:18:16.390556  546740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:18:16.416722  546740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42705
	I1205 19:18:16.417377  546740 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:18:16.418079  546740 main.go:141] libmachine: Using API Version  1
	I1205 19:18:16.418095  546740 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:18:16.418478  546740 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:18:16.418700  546740 main.go:141] libmachine: (functional-583983) Calling .DriverName
	I1205 19:18:16.418995  546740 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 19:18:16.419307  546740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:18:16.419348  546740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:18:16.437339  546740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I1205 19:18:16.437803  546740 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:18:16.438503  546740 main.go:141] libmachine: Using API Version  1
	I1205 19:18:16.438558  546740 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:18:16.439019  546740 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:18:16.439261  546740 main.go:141] libmachine: (functional-583983) Calling .DriverName
	I1205 19:18:16.476207  546740 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1205 19:18:16.477674  546740 start.go:297] selected driver: kvm2
	I1205 19:18:16.477693  546740 start.go:901] validating driver "kvm2" against &{Name:functional-583983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-583983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 19:18:16.477855  546740 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:18:16.480519  546740 out.go:201] 
	W1205 19:18:16.482119  546740 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 19:18:16.483555  546740 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
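
Status output can be shaped either with a Go template or as JSON, as exercised above; for example (template fields taken from the run):

    out/minikube-linux-amd64 -p functional-583983 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    out/minikube-linux-amd64 -p functional-583983 status -o json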

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-583983 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-583983 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6df2j" [97bc4f3e-511a-4cf0-bb00-6f9714cbc087] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6df2j" [97bc4f3e-511a-4cf0-bb00-6f9714cbc087] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004814375s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.49:30420
functional_test.go:1675: http://192.168.39.49:30420: success! body:

Hostname: hello-node-connect-67bdd5bbb4-6df2j

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.49:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.49:30420
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.78s)
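The sequence above (create a deployment, expose it as a NodePort service, resolve the URL through minikube, then hit it) is a common way to smoke-test service routing. A hand-written sketch of the same flow; the profile name "demo" is illustrative, while the image and service name mirror the log:

    kubectl create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p demo service hello-node-connect --url)
    curl -s "$URL"    # echoserver replies with request details like the body shown above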

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (43.14s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c415a9d6-8d78-4f8d-94a6-963e03d0e59c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00446255s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-583983 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-583983 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-583983 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-583983 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cad86569-7886-43db-be6c-03cda4526f22] Pending
helpers_test.go:344: "sp-pod" [cad86569-7886-43db-be6c-03cda4526f22] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cad86569-7886-43db-be6c-03cda4526f22] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.00415102s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-583983 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-583983 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-583983 delete -f testdata/storage-provisioner/pod.yaml: (2.30459738s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-583983 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [39da67cf-1e84-413f-ab59-3d83b5e7824a] Pending
helpers_test.go:344: "sp-pod" [39da67cf-1e84-413f-ab59-3d83b5e7824a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [39da67cf-1e84-413f-ab59-3d83b5e7824a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003966131s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-583983 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.14s)
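The test above follows the usual persistence-check pattern: write a file through a PersistentVolumeClaim from one pod, delete the pod, and read the file back from a replacement pod. A rough sketch of that pattern; pvc.yaml and pod.yaml here are illustrative stand-ins for the testdata manifests (a claim named myclaim mounted by a pod sp-pod at /tmp/mount):

    kubectl apply -f pvc.yaml                      # PersistentVolumeClaim "myclaim"
    kubectl apply -f pod.yaml                      # pod "sp-pod" mounting myclaim at /tmp/mount
    kubectl exec sp-pod -- touch /tmp/mount/foo    # write through the claim
    kubectl delete -f pod.yaml                     # remove the pod, keep the claim
    kubectl apply -f pod.yaml                      # schedule a fresh pod against the same claim
    kubectl exec sp-pod -- ls /tmp/mount           # the file written earlier is still there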

TestFunctional/parallel/SSHCmd (0.49s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

TestFunctional/parallel/CpCmd (1.34s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh -n functional-583983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 cp functional-583983:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd554787630/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh -n functional-583983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh -n functional-583983 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)
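For reference, minikube cp copies files in both directions between the host and a node, and minikube ssh -n targets a specific node, which is what the steps above exercise. A minimal sketch with an assumed profile and file paths:

    out/minikube-linux-amd64 -p demo cp ./cp-test.txt /home/docker/cp-test.txt       # host -> node
    out/minikube-linux-amd64 -p demo ssh -n demo "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p demo cp demo:/home/docker/cp-test.txt ./roundtrip.txt  # node -> host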

TestFunctional/parallel/MySQL (26.87s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-583983 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-v98l9" [50a173cf-a261-4c1d-9b24-5726fa4cfa4c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-v98l9" [50a173cf-a261-4c1d-9b24-5726fa4cfa4c] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.00440572s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-583983 exec mysql-6cdb49bbb-v98l9 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-583983 exec mysql-6cdb49bbb-v98l9 -- mysql -ppassword -e "show databases;": exit status 1 (121.518456ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1205 19:19:03.008771  538186 retry.go:31] will retry after 1.421554949s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-583983 exec mysql-6cdb49bbb-v98l9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.87s)

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/538186/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "sudo cat /etc/test/nested/copy/538186/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
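The path being checked here comes from minikube's file sync mechanism: files placed under $MINIKUBE_HOME/files/<absolute path> on the host are copied to that absolute path inside the guest when the cluster starts. A rough sketch under that assumption, with made-up file names and a made-up profile:

    mkdir -p ~/.minikube/files/etc/demo
    echo "hello from the host" > ~/.minikube/files/etc/demo/synced.txt
    out/minikube-linux-amd64 -p demo start                         # (re)start so the file is synced
    out/minikube-linux-amd64 -p demo ssh "sudo cat /etc/demo/synced.txt"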

TestFunctional/parallel/CertSync (1.86s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/538186.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "sudo cat /etc/ssl/certs/538186.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/538186.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "sudo cat /usr/share/ca-certificates/538186.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5381862.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "sudo cat /etc/ssl/certs/5381862.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5381862.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "sudo cat /usr/share/ca-certificates/5381862.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.86s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-583983 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
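The Go template used above is handy on its own for listing just the label keys of a node; the same one-liner outside the test harness (equivalent -o go-template form):

    kubectl get nodes -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'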

TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583983 ssh "sudo systemctl is-active docker": exit status 1 (267.60281ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583983 ssh "sudo systemctl is-active containerd": exit status 1 (256.474979ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

TestFunctional/parallel/License (0.63s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-583983 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-583983 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-rjgrj" [37152ab0-df7d-40ef-9e6d-fb1a7f6d503b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-rjgrj" [37152ab0-df7d-40ef-9e6d-fb1a7f6d503b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.005901996s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/MountCmd/any-port (11.84s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-583983 /tmp/TestFunctionalparallelMountCmdany-port624833357/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733426295413085398" to /tmp/TestFunctionalparallelMountCmdany-port624833357/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733426295413085398" to /tmp/TestFunctionalparallelMountCmdany-port624833357/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733426295413085398" to /tmp/TestFunctionalparallelMountCmdany-port624833357/001/test-1733426295413085398
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583983 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (216.431908ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 19:18:15.629820  538186 retry.go:31] will retry after 649.310498ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 19:18 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 19:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 19:18 test-1733426295413085398
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh cat /mount-9p/test-1733426295413085398
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-583983 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [284c51ad-d646-4d9f-be2c-d38ceeaa0445] Pending
helpers_test.go:344: "busybox-mount" [284c51ad-d646-4d9f-be2c-d38ceeaa0445] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [284c51ad-d646-4d9f-be2c-d38ceeaa0445] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [284c51ad-d646-4d9f-be2c-d38ceeaa0445] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.005543494s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-583983 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583983 /tmp/TestFunctionalparallelMountCmdany-port624833357/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.84s)
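The mount test above boils down to: start a 9p mount from a host directory into the guest, verify it with findmnt, use it, and tear it down. A hand-written sketch of that flow (host path and profile name are illustrative; minikube mount stays in the foreground, so it is backgrounded here):

    out/minikube-linux-amd64 -p demo mount /tmp/shared:/mount-9p &
    out/minikube-linux-amd64 -p demo ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p demo ssh "ls -la /mount-9p"
    out/minikube-linux-amd64 -p demo mount --kill=true    # stop the background mount helper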

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "301.213621ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "64.867083ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "340.608871ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "60.093728ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.03s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-amd64 -p functional-583983 version -o=json --components: (1.026193722s)
--- PASS: TestFunctional/parallel/Version/components (1.03s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/MountCmd/specific-port (1.72s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-583983 /tmp/TestFunctionalparallelMountCmdspecific-port2134769433/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583983 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.39012ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 19:18:27.495640  538186 retry.go:31] will retry after 428.413854ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583983 /tmp/TestFunctionalparallelMountCmdspecific-port2134769433/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583983 ssh "sudo umount -f /mount-9p": exit status 1 (208.294191ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-583983 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583983 /tmp/TestFunctionalparallelMountCmdspecific-port2134769433/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.72s)

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 service list -o json
functional_test.go:1494: Took "442.068012ms" to run "out/minikube-linux-amd64 -p functional-583983 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.49:31568
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.28s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-583983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1801747072/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-583983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1801747072/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-583983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1801747072/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583983 ssh "findmnt -T" /mount1: exit status 1 (266.049864ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1205 19:18:29.236073  538186 retry.go:31] will retry after 309.858388ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-583983 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1801747072/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1801747072/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-583983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1801747072/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.28s)

TestFunctional/parallel/ServiceCmd/Format (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.49:31568
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-583983 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-583983
localhost/kicbase/echo-server:functional-583983
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-583983 image ls --format short --alsologtostderr:
I1205 19:18:44.182073  548662 out.go:345] Setting OutFile to fd 1 ...
I1205 19:18:44.182243  548662 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:18:44.182257  548662 out.go:358] Setting ErrFile to fd 2...
I1205 19:18:44.182265  548662 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:18:44.182463  548662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
I1205 19:18:44.183111  548662 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:18:44.183243  548662 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:18:44.183661  548662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:18:44.183716  548662 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:18:44.205008  548662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
I1205 19:18:44.205596  548662 main.go:141] libmachine: () Calling .GetVersion
I1205 19:18:44.206387  548662 main.go:141] libmachine: Using API Version  1
I1205 19:18:44.206440  548662 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:18:44.206810  548662 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:18:44.207004  548662 main.go:141] libmachine: (functional-583983) Calling .GetState
I1205 19:18:44.209816  548662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:18:44.209872  548662 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:18:44.226001  548662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37615
I1205 19:18:44.226570  548662 main.go:141] libmachine: () Calling .GetVersion
I1205 19:18:44.227082  548662 main.go:141] libmachine: Using API Version  1
I1205 19:18:44.227100  548662 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:18:44.227390  548662 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:18:44.227646  548662 main.go:141] libmachine: (functional-583983) Calling .DriverName
I1205 19:18:44.227917  548662 ssh_runner.go:195] Run: systemctl --version
I1205 19:18:44.227961  548662 main.go:141] libmachine: (functional-583983) Calling .GetSSHHostname
I1205 19:18:44.231119  548662 main.go:141] libmachine: (functional-583983) DBG | domain functional-583983 has defined MAC address 52:54:00:d5:db:3f in network mk-functional-583983
I1205 19:18:44.231633  548662 main.go:141] libmachine: (functional-583983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:db:3f", ip: ""} in network mk-functional-583983: {Iface:virbr1 ExpiryTime:2024-12-05 20:15:44 +0000 UTC Type:0 Mac:52:54:00:d5:db:3f Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-583983 Clientid:01:52:54:00:d5:db:3f}
I1205 19:18:44.231666  548662 main.go:141] libmachine: (functional-583983) DBG | domain functional-583983 has defined IP address 192.168.39.49 and MAC address 52:54:00:d5:db:3f in network mk-functional-583983
I1205 19:18:44.231964  548662 main.go:141] libmachine: (functional-583983) Calling .GetSSHPort
I1205 19:18:44.232198  548662 main.go:141] libmachine: (functional-583983) Calling .GetSSHKeyPath
I1205 19:18:44.232421  548662 main.go:141] libmachine: (functional-583983) Calling .GetSSHUsername
I1205 19:18:44.232660  548662 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/functional-583983/id_rsa Username:docker}
I1205 19:18:44.337298  548662 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 19:18:44.436378  548662 main.go:141] libmachine: Making call to close driver server
I1205 19:18:44.436401  548662 main.go:141] libmachine: (functional-583983) Calling .Close
I1205 19:18:44.437300  548662 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:18:44.437385  548662 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:18:44.437414  548662 main.go:141] libmachine: Making call to close driver server
I1205 19:18:44.437433  548662 main.go:141] libmachine: (functional-583983) Calling .Close
I1205 19:18:44.437286  548662 main.go:141] libmachine: (functional-583983) DBG | Closing plugin on server side
I1205 19:18:44.437853  548662 main.go:141] libmachine: (functional-583983) DBG | Closing plugin on server side
I1205 19:18:44.437892  548662 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:18:44.437911  548662 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-583983 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-583983  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-583983  | 466072235f051 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-583983 image ls --format table --alsologtostderr:
I1205 19:18:44.934488  548762 out.go:345] Setting OutFile to fd 1 ...
I1205 19:18:44.934667  548762 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:18:44.934680  548762 out.go:358] Setting ErrFile to fd 2...
I1205 19:18:44.934687  548762 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:18:44.934953  548762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
I1205 19:18:44.935668  548762 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:18:44.935774  548762 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:18:44.936154  548762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:18:44.936198  548762 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:18:44.958898  548762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37439
I1205 19:18:44.959758  548762 main.go:141] libmachine: () Calling .GetVersion
I1205 19:18:44.960604  548762 main.go:141] libmachine: Using API Version  1
I1205 19:18:44.960638  548762 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:18:44.961110  548762 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:18:44.961351  548762 main.go:141] libmachine: (functional-583983) Calling .GetState
I1205 19:18:44.963847  548762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:18:44.963952  548762 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:18:44.981735  548762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41959
I1205 19:18:44.982353  548762 main.go:141] libmachine: () Calling .GetVersion
I1205 19:18:44.983022  548762 main.go:141] libmachine: Using API Version  1
I1205 19:18:44.983050  548762 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:18:44.983476  548762 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:18:44.983661  548762 main.go:141] libmachine: (functional-583983) Calling .DriverName
I1205 19:18:44.983856  548762 ssh_runner.go:195] Run: systemctl --version
I1205 19:18:44.983893  548762 main.go:141] libmachine: (functional-583983) Calling .GetSSHHostname
I1205 19:18:44.987598  548762 main.go:141] libmachine: (functional-583983) DBG | domain functional-583983 has defined MAC address 52:54:00:d5:db:3f in network mk-functional-583983
I1205 19:18:44.988201  548762 main.go:141] libmachine: (functional-583983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:db:3f", ip: ""} in network mk-functional-583983: {Iface:virbr1 ExpiryTime:2024-12-05 20:15:44 +0000 UTC Type:0 Mac:52:54:00:d5:db:3f Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-583983 Clientid:01:52:54:00:d5:db:3f}
I1205 19:18:44.988224  548762 main.go:141] libmachine: (functional-583983) DBG | domain functional-583983 has defined IP address 192.168.39.49 and MAC address 52:54:00:d5:db:3f in network mk-functional-583983
I1205 19:18:44.988499  548762 main.go:141] libmachine: (functional-583983) Calling .GetSSHPort
I1205 19:18:44.988686  548762 main.go:141] libmachine: (functional-583983) Calling .GetSSHKeyPath
I1205 19:18:44.988850  548762 main.go:141] libmachine: (functional-583983) Calling .GetSSHUsername
I1205 19:18:44.989004  548762 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/functional-583983/id_rsa Username:docker}
I1205 19:18:45.133541  548762 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 19:18:45.407082  548762 main.go:141] libmachine: Making call to close driver server
I1205 19:18:45.407104  548762 main.go:141] libmachine: (functional-583983) Calling .Close
I1205 19:18:45.407483  548762 main.go:141] libmachine: (functional-583983) DBG | Closing plugin on server side
I1205 19:18:45.407493  548762 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:18:45.407521  548762 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:18:45.407536  548762 main.go:141] libmachine: Making call to close driver server
I1205 19:18:45.407548  548762 main.go:141] libmachine: (functional-583983) Calling .Close
I1205 19:18:45.407789  548762 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:18:45.407803  548762 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.55s)
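The ImageList tests in this group render the same image inventory in different formats (short and table above, json below). For reference, the corresponding commands against an arbitrary profile (the name "demo" is illustrative) are:

    out/minikube-linux-amd64 -p demo image ls --format short
    out/minikube-linux-amd64 -p demo image ls --format table
    out/minikube-linux-amd64 -p demo image ls --format json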

TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-583983 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["
registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"66f8bdd3810c96dc5c28aec39583af
731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"466072235f0516ae44652651373659d35717d5ca235adf79f95e6e3eb860495d","repoDigests":["localhost/minikube-local-cache-test@sha256:032387b7728f43faad6ab7ecb565f8db2385a30e9ea3b24c4d4a90de606715cf"],"repoTags":["localhost/minikube-local-cache-test:functional-583983"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e9
56d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d35749
49b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","r
epoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-583983"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","do
cker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-583983 image ls --format json --alsologtostderr:
I1205 19:18:44.506295  548708 out.go:345] Setting OutFile to fd 1 ...
I1205 19:18:44.506436  548708 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:18:44.506448  548708 out.go:358] Setting ErrFile to fd 2...
I1205 19:18:44.506455  548708 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:18:44.506771  548708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
I1205 19:18:44.507717  548708 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:18:44.507887  548708 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:18:44.508534  548708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:18:44.508596  548708 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:18:44.526278  548708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33411
I1205 19:18:44.526808  548708 main.go:141] libmachine: () Calling .GetVersion
I1205 19:18:44.527516  548708 main.go:141] libmachine: Using API Version  1
I1205 19:18:44.527547  548708 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:18:44.527991  548708 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:18:44.528222  548708 main.go:141] libmachine: (functional-583983) Calling .GetState
I1205 19:18:44.530239  548708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:18:44.530296  548708 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:18:44.545462  548708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43339
I1205 19:18:44.546141  548708 main.go:141] libmachine: () Calling .GetVersion
I1205 19:18:44.546874  548708 main.go:141] libmachine: Using API Version  1
I1205 19:18:44.546906  548708 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:18:44.547283  548708 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:18:44.547464  548708 main.go:141] libmachine: (functional-583983) Calling .DriverName
I1205 19:18:44.547658  548708 ssh_runner.go:195] Run: systemctl --version
I1205 19:18:44.547683  548708 main.go:141] libmachine: (functional-583983) Calling .GetSSHHostname
I1205 19:18:44.550739  548708 main.go:141] libmachine: (functional-583983) DBG | domain functional-583983 has defined MAC address 52:54:00:d5:db:3f in network mk-functional-583983
I1205 19:18:44.551243  548708 main.go:141] libmachine: (functional-583983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:db:3f", ip: ""} in network mk-functional-583983: {Iface:virbr1 ExpiryTime:2024-12-05 20:15:44 +0000 UTC Type:0 Mac:52:54:00:d5:db:3f Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-583983 Clientid:01:52:54:00:d5:db:3f}
I1205 19:18:44.551271  548708 main.go:141] libmachine: (functional-583983) DBG | domain functional-583983 has defined IP address 192.168.39.49 and MAC address 52:54:00:d5:db:3f in network mk-functional-583983
I1205 19:18:44.551453  548708 main.go:141] libmachine: (functional-583983) Calling .GetSSHPort
I1205 19:18:44.551635  548708 main.go:141] libmachine: (functional-583983) Calling .GetSSHKeyPath
I1205 19:18:44.551797  548708 main.go:141] libmachine: (functional-583983) Calling .GetSSHUsername
I1205 19:18:44.552039  548708 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/functional-583983/id_rsa Username:docker}
I1205 19:18:44.671437  548708 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 19:18:44.850586  548708 main.go:141] libmachine: Making call to close driver server
I1205 19:18:44.850605  548708 main.go:141] libmachine: (functional-583983) Calling .Close
I1205 19:18:44.851040  548708 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:18:44.851063  548708 main.go:141] libmachine: (functional-583983) DBG | Closing plugin on server side
I1205 19:18:44.851079  548708 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:18:44.851100  548708 main.go:141] libmachine: Making call to close driver server
I1205 19:18:44.851110  548708 main.go:141] libmachine: (functional-583983) Calling .Close
I1205 19:18:44.851447  548708 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:18:44.851448  548708 main.go:141] libmachine: (functional-583983) DBG | Closing plugin on server side
I1205 19:18:44.851477  548708 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-583983 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-583983
size: "4943877"
- id: 466072235f0516ae44652651373659d35717d5ca235adf79f95e6e3eb860495d
repoDigests:
- localhost/minikube-local-cache-test@sha256:032387b7728f43faad6ab7ecb565f8db2385a30e9ea3b24c4d4a90de606715cf
repoTags:
- localhost/minikube-local-cache-test:functional-583983
size: "3330"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-583983 image ls --format yaml --alsologtostderr:
I1205 19:18:44.188090  548663 out.go:345] Setting OutFile to fd 1 ...
I1205 19:18:44.188314  548663 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:18:44.188326  548663 out.go:358] Setting ErrFile to fd 2...
I1205 19:18:44.188331  548663 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:18:44.188548  548663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
I1205 19:18:44.189402  548663 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:18:44.189561  548663 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:18:44.190013  548663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:18:44.190069  548663 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:18:44.205785  548663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44499
I1205 19:18:44.206437  548663 main.go:141] libmachine: () Calling .GetVersion
I1205 19:18:44.207155  548663 main.go:141] libmachine: Using API Version  1
I1205 19:18:44.207181  548663 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:18:44.207619  548663 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:18:44.207831  548663 main.go:141] libmachine: (functional-583983) Calling .GetState
I1205 19:18:44.210365  548663 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:18:44.210422  548663 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:18:44.225707  548663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43223
I1205 19:18:44.226412  548663 main.go:141] libmachine: () Calling .GetVersion
I1205 19:18:44.227537  548663 main.go:141] libmachine: Using API Version  1
I1205 19:18:44.227551  548663 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:18:44.227918  548663 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:18:44.228126  548663 main.go:141] libmachine: (functional-583983) Calling .DriverName
I1205 19:18:44.228315  548663 ssh_runner.go:195] Run: systemctl --version
I1205 19:18:44.228342  548663 main.go:141] libmachine: (functional-583983) Calling .GetSSHHostname
I1205 19:18:44.231414  548663 main.go:141] libmachine: (functional-583983) DBG | domain functional-583983 has defined MAC address 52:54:00:d5:db:3f in network mk-functional-583983
I1205 19:18:44.231891  548663 main.go:141] libmachine: (functional-583983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:db:3f", ip: ""} in network mk-functional-583983: {Iface:virbr1 ExpiryTime:2024-12-05 20:15:44 +0000 UTC Type:0 Mac:52:54:00:d5:db:3f Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-583983 Clientid:01:52:54:00:d5:db:3f}
I1205 19:18:44.231926  548663 main.go:141] libmachine: (functional-583983) DBG | domain functional-583983 has defined IP address 192.168.39.49 and MAC address 52:54:00:d5:db:3f in network mk-functional-583983
I1205 19:18:44.232070  548663 main.go:141] libmachine: (functional-583983) Calling .GetSSHPort
I1205 19:18:44.232237  548663 main.go:141] libmachine: (functional-583983) Calling .GetSSHKeyPath
I1205 19:18:44.232416  548663 main.go:141] libmachine: (functional-583983) Calling .GetSSHUsername
I1205 19:18:44.232558  548663 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/functional-583983/id_rsa Username:docker}
I1205 19:18:44.343080  548663 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 19:18:44.558066  548663 main.go:141] libmachine: Making call to close driver server
I1205 19:18:44.558078  548663 main.go:141] libmachine: (functional-583983) Calling .Close
I1205 19:18:44.558350  548663 main.go:141] libmachine: (functional-583983) DBG | Closing plugin on server side
I1205 19:18:44.558402  548663 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:18:44.558412  548663 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:18:44.558417  548663 main.go:141] libmachine: Making call to close driver server
I1205 19:18:44.558422  548663 main.go:141] libmachine: (functional-583983) Calling .Close
I1205 19:18:44.558650  548663 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:18:44.558669  548663 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (12.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-583983 ssh pgrep buildkitd: exit status 1 (302.04661ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image build -t localhost/my-image:functional-583983 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-583983 image build -t localhost/my-image:functional-583983 testdata/build --alsologtostderr: (11.63654846s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-583983 image build -t localhost/my-image:functional-583983 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 42f64ad714f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-583983
--> e521902f404
Successfully tagged localhost/my-image:functional-583983
e521902f4043e6974ad93c7641fa41cc6e9600377cfd68ffb1fb37971ea2f693
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-583983 image build -t localhost/my-image:functional-583983 testdata/build --alsologtostderr:
I1205 19:18:44.950087  548768 out.go:345] Setting OutFile to fd 1 ...
I1205 19:18:44.950303  548768 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:18:44.950334  548768 out.go:358] Setting ErrFile to fd 2...
I1205 19:18:44.950347  548768 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 19:18:44.950575  548768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
I1205 19:18:44.951269  548768 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:18:44.952002  548768 config.go:182] Loaded profile config "functional-583983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 19:18:44.952477  548768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:18:44.952532  548768 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:18:44.970876  548768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
I1205 19:18:44.971580  548768 main.go:141] libmachine: () Calling .GetVersion
I1205 19:18:44.972247  548768 main.go:141] libmachine: Using API Version  1
I1205 19:18:44.972289  548768 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:18:44.972748  548768 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:18:44.972970  548768 main.go:141] libmachine: (functional-583983) Calling .GetState
I1205 19:18:44.975137  548768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:18:44.975186  548768 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:18:44.992235  548768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37905
I1205 19:18:44.992721  548768 main.go:141] libmachine: () Calling .GetVersion
I1205 19:18:44.993245  548768 main.go:141] libmachine: Using API Version  1
I1205 19:18:44.993266  548768 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:18:44.993559  548768 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:18:44.993737  548768 main.go:141] libmachine: (functional-583983) Calling .DriverName
I1205 19:18:44.993925  548768 ssh_runner.go:195] Run: systemctl --version
I1205 19:18:44.993958  548768 main.go:141] libmachine: (functional-583983) Calling .GetSSHHostname
I1205 19:18:44.996723  548768 main.go:141] libmachine: (functional-583983) DBG | domain functional-583983 has defined MAC address 52:54:00:d5:db:3f in network mk-functional-583983
I1205 19:18:44.997240  548768 main.go:141] libmachine: (functional-583983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:db:3f", ip: ""} in network mk-functional-583983: {Iface:virbr1 ExpiryTime:2024-12-05 20:15:44 +0000 UTC Type:0 Mac:52:54:00:d5:db:3f Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:functional-583983 Clientid:01:52:54:00:d5:db:3f}
I1205 19:18:44.997323  548768 main.go:141] libmachine: (functional-583983) DBG | domain functional-583983 has defined IP address 192.168.39.49 and MAC address 52:54:00:d5:db:3f in network mk-functional-583983
I1205 19:18:44.997571  548768 main.go:141] libmachine: (functional-583983) Calling .GetSSHPort
I1205 19:18:44.997718  548768 main.go:141] libmachine: (functional-583983) Calling .GetSSHKeyPath
I1205 19:18:44.997878  548768 main.go:141] libmachine: (functional-583983) Calling .GetSSHUsername
I1205 19:18:44.998001  548768 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/functional-583983/id_rsa Username:docker}
I1205 19:18:45.126997  548768 build_images.go:161] Building image from path: /tmp/build.3390370660.tar
I1205 19:18:45.127098  548768 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 19:18:45.150568  548768 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3390370660.tar
I1205 19:18:45.178930  548768 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3390370660.tar: stat -c "%s %y" /var/lib/minikube/build/build.3390370660.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3390370660.tar': No such file or directory
I1205 19:18:45.178986  548768 ssh_runner.go:362] scp /tmp/build.3390370660.tar --> /var/lib/minikube/build/build.3390370660.tar (3072 bytes)
I1205 19:18:45.255823  548768 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3390370660
I1205 19:18:45.289291  548768 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3390370660 -xf /var/lib/minikube/build/build.3390370660.tar
I1205 19:18:45.323282  548768 crio.go:315] Building image: /var/lib/minikube/build/build.3390370660
I1205 19:18:45.323373  548768 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-583983 /var/lib/minikube/build/build.3390370660 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1205 19:18:56.475357  548768 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-583983 /var/lib/minikube/build/build.3390370660 --cgroup-manager=cgroupfs: (11.151954763s)
I1205 19:18:56.475450  548768 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3390370660
I1205 19:18:56.487107  548768 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3390370660.tar
I1205 19:18:56.498423  548768 build_images.go:217] Built localhost/my-image:functional-583983 from /tmp/build.3390370660.tar
I1205 19:18:56.498462  548768 build_images.go:133] succeeded building to: functional-583983
I1205 19:18:56.498466  548768 build_images.go:134] failed building to: 
I1205 19:18:56.498496  548768 main.go:141] libmachine: Making call to close driver server
I1205 19:18:56.498511  548768 main.go:141] libmachine: (functional-583983) Calling .Close
I1205 19:18:56.498821  548768 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:18:56.498841  548768 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:18:56.498863  548768 main.go:141] libmachine: (functional-583983) DBG | Closing plugin on server side
I1205 19:18:56.498893  548768 main.go:141] libmachine: Making call to close driver server
I1205 19:18:56.498911  548768 main.go:141] libmachine: (functional-583983) Calling .Close
I1205 19:18:56.499190  548768 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:18:56.499210  548768 main.go:141] libmachine: (functional-583983) DBG | Closing plugin on server side
I1205 19:18:56.499215  548768 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (12.18s)
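Note: the STEP 1/3 .. 3/3 lines above imply a very small build context. A minimal sketch of reproducing this build by hand, assuming a context equivalent to testdata/build (whose real file contents are not shown in this report; the content.txt payload below is a placeholder):

mkdir -p /tmp/build-context && cd /tmp/build-context
printf 'placeholder payload\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# minikube hands the context to the node's runtime; with cri-o this ends up as "sudo podman build" (see the stderr above)
out/minikube-linux-amd64 -p functional-583983 image build -t localhost/my-image:functional-583983 /tmp/build-context --alsologtostderr
out/minikube-linux-amd64 -p functional-583983 image ls   # the new localhost/my-image tag should now be listed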

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.844361696s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-583983
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image load --daemon kicbase/echo-server:functional-583983 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-583983 image load --daemon kicbase/echo-server:functional-583983 --alsologtostderr: (2.793399547s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image ls
E1205 19:18:35.245058  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.02s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image load --daemon kicbase/echo-server:functional-583983 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
2024/12/05 19:18:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-583983
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image load --daemon kicbase/echo-server:functional-583983 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image save kicbase/echo-server:functional-583983 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-583983 image save kicbase/echo-server:functional-583983 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.509622469s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image rm kicbase/echo-server:functional-583983 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-583983
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-583983 image save --daemon kicbase/echo-server:functional-583983 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-583983
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)
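Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above together exercise a full save/remove/reload round trip. Condensed into one sequence, using the profile and tag from this run (the tarball path is arbitrary):

PROFILE=functional-583983
IMG=kicbase/echo-server:functional-583983
TAR=/tmp/echo-server-save.tar

out/minikube-linux-amd64 -p "$PROFILE" image save "$IMG" "$TAR" --alsologtostderr    # cluster runtime -> tarball
out/minikube-linux-amd64 -p "$PROFILE" image rm "$IMG" --alsologtostderr             # remove it from the cluster
out/minikube-linux-amd64 -p "$PROFILE" image load "$TAR" --alsologtostderr           # tarball -> cluster runtime
out/minikube-linux-amd64 -p "$PROFILE" image save --daemon "$IMG" --alsologtostderr  # cluster runtime -> local docker daemon
docker image inspect localhost/kicbase/echo-server:functional-583983                 # verify the daemon-side copy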

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-583983
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-583983
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-583983
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (206.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-106302 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 19:20:51.381155  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:21:19.086389  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-106302 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m25.799442626s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (206.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-106302 -- rollout status deployment/busybox: (5.193371222s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-9kxtc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-9tp62 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-p8z47 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-9kxtc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-9tp62 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-p8z47 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-9kxtc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-9tp62 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-p8z47 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.52s)
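Note: the manifest applied above (testdata/ha/ha-pod-dns-test.yaml) is not reproduced in this report; from the pod names and the rollout/nslookup checks it is a small multi-replica busybox Deployment. A purely illustrative equivalent, with the image, replica count and command being assumptions (kubectl --context ha-106302 stands in for the harness's "out/minikube-linux-amd64 kubectl -p ha-106302 --" wrapper):

kubectl --context ha-106302 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 3                              # one pod per control-plane node in this run
  selector:
    matchLabels: {app: busybox}
  template:
    metadata:
      labels: {app: busybox}
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc   # assumed; any busybox image with nslookup works
        command: ["sleep", "3600"]
EOF
kubectl --context ha-106302 rollout status deployment/busybox
kubectl --context ha-106302 exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local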

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-9kxtc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-9kxtc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-9tp62 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-9tp62 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-p8z47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-106302 -- exec busybox-7dff88458-p8z47 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
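Note: the one-liner above parses busybox nslookup output to recover the IP behind host.minikube.internal, then pings it from inside the pod. The same pipeline, annotated (pod name and context taken from this run; the "line 5" assumption matches busybox's nslookup output format):

# nslookup host.minikube.internal  -> multi-line answer; line 5 is the "Address 1: <ip> <name>" line for the query
# awk 'NR==5'                      -> keep only that fifth line
# cut -d' ' -f3                    -> take the third space-separated field, i.e. the IP itself
HOST_IP=$(kubectl --context ha-106302 exec busybox-7dff88458-9kxtc -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
kubectl --context ha-106302 exec busybox-7dff88458-9kxtc -- sh -c "ping -c 1 $HOST_IP"
echo "$HOST_IP"    # 192.168.39.1 in this run (the host-side gateway of the VM network)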

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (57.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-106302 -v=7 --alsologtostderr
E1205 19:23:15.011987  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:15.018386  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:15.029850  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:15.051296  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:15.092839  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:15.174393  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:15.336006  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:15.657325  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:16.298710  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:17.581226  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:20.142969  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:25.265130  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 19:23:35.506508  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-106302 -v=7 --alsologtostderr: (56.387962149s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.29s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-106302 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp testdata/cp-test.txt ha-106302:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302:/home/docker/cp-test.txt ha-106302-m02:/home/docker/cp-test_ha-106302_ha-106302-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m02 "sudo cat /home/docker/cp-test_ha-106302_ha-106302-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302:/home/docker/cp-test.txt ha-106302-m03:/home/docker/cp-test_ha-106302_ha-106302-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m03 "sudo cat /home/docker/cp-test_ha-106302_ha-106302-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302:/home/docker/cp-test.txt ha-106302-m04:/home/docker/cp-test_ha-106302_ha-106302-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m04 "sudo cat /home/docker/cp-test_ha-106302_ha-106302-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp testdata/cp-test.txt ha-106302-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m02:/home/docker/cp-test.txt ha-106302:/home/docker/cp-test_ha-106302-m02_ha-106302.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302 "sudo cat /home/docker/cp-test_ha-106302-m02_ha-106302.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m02:/home/docker/cp-test.txt ha-106302-m03:/home/docker/cp-test_ha-106302-m02_ha-106302-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m03 "sudo cat /home/docker/cp-test_ha-106302-m02_ha-106302-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m02:/home/docker/cp-test.txt ha-106302-m04:/home/docker/cp-test_ha-106302-m02_ha-106302-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m04 "sudo cat /home/docker/cp-test_ha-106302-m02_ha-106302-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp testdata/cp-test.txt ha-106302-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt ha-106302:/home/docker/cp-test_ha-106302-m03_ha-106302.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302 "sudo cat /home/docker/cp-test_ha-106302-m03_ha-106302.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt ha-106302-m02:/home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m02 "sudo cat /home/docker/cp-test_ha-106302-m03_ha-106302-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m03:/home/docker/cp-test.txt ha-106302-m04:/home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m04 "sudo cat /home/docker/cp-test_ha-106302-m03_ha-106302-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp testdata/cp-test.txt ha-106302-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile42720673/001/cp-test_ha-106302-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt ha-106302:/home/docker/cp-test_ha-106302-m04_ha-106302.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302 "sudo cat /home/docker/cp-test_ha-106302-m04_ha-106302.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt ha-106302-m02:/home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m02 "sudo cat /home/docker/cp-test_ha-106302-m04_ha-106302-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 cp ha-106302-m04:/home/docker/cp-test.txt ha-106302-m03:/home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m03 "sudo cat /home/docker/cp-test_ha-106302-m04_ha-106302-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.55s)
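Note: the test above cycles "minikube cp" through every direction it supports. Condensed, the three patterns exercised are (profile and node names from this run; the host-side destination path is arbitrary):

# host -> node ("<node>:<path>" addresses a file on that node)
out/minikube-linux-amd64 -p ha-106302 cp testdata/cp-test.txt ha-106302:/home/docker/cp-test.txt
# node -> host
out/minikube-linux-amd64 -p ha-106302 cp ha-106302:/home/docker/cp-test.txt /tmp/cp-test_ha-106302.txt
# node -> node (primary control plane to the second one)
out/minikube-linux-amd64 -p ha-106302 cp ha-106302:/home/docker/cp-test.txt ha-106302-m02:/home/docker/cp-test_ha-106302_ha-106302-m02.txt
# verify on the target node over ssh
out/minikube-linux-amd64 -p ha-106302 ssh -n ha-106302-m02 "sudo cat /home/docker/cp-test_ha-106302_ha-106302-m02.txt"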

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-106302 node delete m03 -v=7 --alsologtostderr: (16.168053351s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-106302 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.95s)
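Note: the go-template used above is compact; spelled out, the same check is:

# Drop the third control-plane node, then confirm every remaining node still reports Ready.
out/minikube-linux-amd64 -p ha-106302 node delete m03 -v=7 --alsologtostderr
kubectl --context ha-106302 get nodes
# The template walks each node's status.conditions and prints the status of the condition whose type is "Ready",
# so the expected output is one "True" per remaining node:
kubectl --context ha-106302 get nodes \
  -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'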

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
x
+
TestJSONOutput/start/Command (87.56s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-061833 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-061833 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.557465147s)
--- PASS: TestJSONOutput/start/Command (87.56s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-061833 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-061833 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-061833 --output=json --user=testUser
E1205 19:50:51.384076  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-061833 --output=json --user=testUser: (7.382406625s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-953161 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-953161 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.37026ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"94d9b86b-4a2e-44bd-89e9-c92c61ce6b75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-953161] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbae782e-2234-4fd8-91e8-b91a5f69e570","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20052"}}
	{"specversion":"1.0","id":"2218409b-5ff7-4aae-a9f0-d5f59403a259","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"350e69ab-d53b-4faa-b5f6-6e793ef2c83c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig"}}
	{"specversion":"1.0","id":"b77d805b-59e5-4278-b65e-0c368a6c1423","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube"}}
	{"specversion":"1.0","id":"3a5cc737-536d-41b6-8d40-c250e383439e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"519df17e-4b6a-404b-8a27-8804ffedd6f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f21b8271-445b-4309-be00-5b84e9e7efdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-953161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-953161
--- PASS: TestErrorJSONOutput (0.21s)
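
Note: every stdout line above is a CloudEvents-style JSON object emitted by `minikube start --output=json`. As a rough sketch (field names copied from the output above, not taken from minikube's source), the terminal error event can be decoded like this:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event declares only the fields this sketch actually reads.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// The io.k8s.sigs.minikube.error line from the stdout block above.
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		// Prints: io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS exit 56
		fmt.Println(e.Type, e.Data["name"], "exit", e.Data["exitcode"])
	}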

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (89.65s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-070453 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-070453 --driver=kvm2  --container-runtime=crio: (42.128811427s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-082567 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-082567 --driver=kvm2  --container-runtime=crio: (44.625423146s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-070453
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-082567
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-082567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-082567
helpers_test.go:175: Cleaning up "first-070453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-070453
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-070453: (1.000912276s)
--- PASS: TestMinikubeProfile (89.65s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (31.45s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-398890 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-398890 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.448086469s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.45s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-398890 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-398890 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-416075 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1205 19:53:15.014709  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-416075 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.351010449s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.35s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-416075 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-416075 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-398890 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-416075 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-416075 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-416075
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-416075: (1.342204277s)
--- PASS: TestMountStart/serial/Stop (1.34s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.12s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-416075
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-416075: (22.122898939s)
--- PASS: TestMountStart/serial/RestartStopped (23.12s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-416075 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-416075 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-346389 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-346389 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.547522339s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.98s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- rollout status deployment/busybox
E1205 19:55:51.380955  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-346389 -- rollout status deployment/busybox: (5.683461146s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- exec busybox-7dff88458-g4r7j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- exec busybox-7dff88458-qbp6t -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- exec busybox-7dff88458-g4r7j -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- exec busybox-7dff88458-qbp6t -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- exec busybox-7dff88458-g4r7j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- exec busybox-7dff88458-qbp6t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.27s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- exec busybox-7dff88458-g4r7j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- exec busybox-7dff88458-g4r7j -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- exec busybox-7dff88458-qbp6t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-346389 -- exec busybox-7dff88458-qbp6t -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                    
TestMultiNode/serial/AddNode (50.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-346389 -v 3 --alsologtostderr
E1205 19:56:18.078124  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-346389 -v 3 --alsologtostderr: (50.323970483s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.93s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-346389 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp testdata/cp-test.txt multinode-346389:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp multinode-346389:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile122835969/001/cp-test_multinode-346389.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp multinode-346389:/home/docker/cp-test.txt multinode-346389-m02:/home/docker/cp-test_multinode-346389_multinode-346389-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m02 "sudo cat /home/docker/cp-test_multinode-346389_multinode-346389-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp multinode-346389:/home/docker/cp-test.txt multinode-346389-m03:/home/docker/cp-test_multinode-346389_multinode-346389-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m03 "sudo cat /home/docker/cp-test_multinode-346389_multinode-346389-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp testdata/cp-test.txt multinode-346389-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp multinode-346389-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile122835969/001/cp-test_multinode-346389-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp multinode-346389-m02:/home/docker/cp-test.txt multinode-346389:/home/docker/cp-test_multinode-346389-m02_multinode-346389.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389 "sudo cat /home/docker/cp-test_multinode-346389-m02_multinode-346389.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp multinode-346389-m02:/home/docker/cp-test.txt multinode-346389-m03:/home/docker/cp-test_multinode-346389-m02_multinode-346389-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m03 "sudo cat /home/docker/cp-test_multinode-346389-m02_multinode-346389-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp testdata/cp-test.txt multinode-346389-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp multinode-346389-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile122835969/001/cp-test_multinode-346389-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp multinode-346389-m03:/home/docker/cp-test.txt multinode-346389:/home/docker/cp-test_multinode-346389-m03_multinode-346389.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389 "sudo cat /home/docker/cp-test_multinode-346389-m03_multinode-346389.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 cp multinode-346389-m03:/home/docker/cp-test.txt multinode-346389-m02:/home/docker/cp-test_multinode-346389-m03_multinode-346389-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 ssh -n multinode-346389-m02 "sudo cat /home/docker/cp-test_multinode-346389-m03_multinode-346389-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.61s)

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-346389 node stop m03: (1.528101865s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-346389 status: exit status 7 (448.076512ms)

                                                
                                                
-- stdout --
	multinode-346389
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-346389-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-346389-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-346389 status --alsologtostderr: exit status 7 (442.298212ms)

                                                
                                                
-- stdout --
	multinode-346389
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-346389-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-346389-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:56:55.959707  566878 out.go:345] Setting OutFile to fd 1 ...
	I1205 19:56:55.960005  566878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:56:55.960018  566878 out.go:358] Setting ErrFile to fd 2...
	I1205 19:56:55.960022  566878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 19:56:55.960205  566878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 19:56:55.960418  566878 out.go:352] Setting JSON to false
	I1205 19:56:55.960452  566878 mustload.go:65] Loading cluster: multinode-346389
	I1205 19:56:55.960570  566878 notify.go:220] Checking for updates...
	I1205 19:56:55.960972  566878 config.go:182] Loaded profile config "multinode-346389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 19:56:55.961004  566878 status.go:174] checking status of multinode-346389 ...
	I1205 19:56:55.961562  566878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:56:55.961623  566878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:56:55.983254  566878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I1205 19:56:55.983769  566878 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:56:55.984439  566878 main.go:141] libmachine: Using API Version  1
	I1205 19:56:55.984464  566878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:56:55.984896  566878 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:56:55.985099  566878 main.go:141] libmachine: (multinode-346389) Calling .GetState
	I1205 19:56:55.986884  566878 status.go:371] multinode-346389 host status = "Running" (err=<nil>)
	I1205 19:56:55.986902  566878 host.go:66] Checking if "multinode-346389" exists ...
	I1205 19:56:55.987239  566878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:56:55.987296  566878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:56:56.003206  566878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33741
	I1205 19:56:56.003746  566878 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:56:56.004408  566878 main.go:141] libmachine: Using API Version  1
	I1205 19:56:56.004442  566878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:56:56.004784  566878 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:56:56.004998  566878 main.go:141] libmachine: (multinode-346389) Calling .GetIP
	I1205 19:56:56.007718  566878 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:56:56.008201  566878 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:56:56.008238  566878 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:56:56.008416  566878 host.go:66] Checking if "multinode-346389" exists ...
	I1205 19:56:56.008699  566878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:56:56.008751  566878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:56:56.024454  566878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I1205 19:56:56.025013  566878 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:56:56.025533  566878 main.go:141] libmachine: Using API Version  1
	I1205 19:56:56.025560  566878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:56:56.025910  566878 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:56:56.026102  566878 main.go:141] libmachine: (multinode-346389) Calling .DriverName
	I1205 19:56:56.026290  566878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:56:56.026322  566878 main.go:141] libmachine: (multinode-346389) Calling .GetSSHHostname
	I1205 19:56:56.029229  566878 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:56:56.029641  566878 main.go:141] libmachine: (multinode-346389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:79:33", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:54:07 +0000 UTC Type:0 Mac:52:54:00:5c:79:33 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:multinode-346389 Clientid:01:52:54:00:5c:79:33}
	I1205 19:56:56.029663  566878 main.go:141] libmachine: (multinode-346389) DBG | domain multinode-346389 has defined IP address 192.168.39.170 and MAC address 52:54:00:5c:79:33 in network mk-multinode-346389
	I1205 19:56:56.029781  566878 main.go:141] libmachine: (multinode-346389) Calling .GetSSHPort
	I1205 19:56:56.029949  566878 main.go:141] libmachine: (multinode-346389) Calling .GetSSHKeyPath
	I1205 19:56:56.030104  566878 main.go:141] libmachine: (multinode-346389) Calling .GetSSHUsername
	I1205 19:56:56.030245  566878 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/multinode-346389/id_rsa Username:docker}
	I1205 19:56:56.111857  566878 ssh_runner.go:195] Run: systemctl --version
	I1205 19:56:56.118640  566878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:56:56.133029  566878 kubeconfig.go:125] found "multinode-346389" server: "https://192.168.39.170:8443"
	I1205 19:56:56.133080  566878 api_server.go:166] Checking apiserver status ...
	I1205 19:56:56.133136  566878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:56:56.147502  566878 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1057/cgroup
	W1205 19:56:56.158987  566878 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1057/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1205 19:56:56.159068  566878 ssh_runner.go:195] Run: ls
	I1205 19:56:56.163736  566878 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1205 19:56:56.168171  566878 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I1205 19:56:56.168198  566878 status.go:463] multinode-346389 apiserver status = Running (err=<nil>)
	I1205 19:56:56.168208  566878 status.go:176] multinode-346389 status: &{Name:multinode-346389 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 19:56:56.168224  566878 status.go:174] checking status of multinode-346389-m02 ...
	I1205 19:56:56.168574  566878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:56:56.168608  566878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:56:56.184049  566878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I1205 19:56:56.184537  566878 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:56:56.185090  566878 main.go:141] libmachine: Using API Version  1
	I1205 19:56:56.185113  566878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:56:56.185424  566878 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:56:56.185647  566878 main.go:141] libmachine: (multinode-346389-m02) Calling .GetState
	I1205 19:56:56.187229  566878 status.go:371] multinode-346389-m02 host status = "Running" (err=<nil>)
	I1205 19:56:56.187246  566878 host.go:66] Checking if "multinode-346389-m02" exists ...
	I1205 19:56:56.187584  566878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:56:56.187667  566878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:56:56.203541  566878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33185
	I1205 19:56:56.204034  566878 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:56:56.204591  566878 main.go:141] libmachine: Using API Version  1
	I1205 19:56:56.204616  566878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:56:56.204995  566878 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:56:56.205270  566878 main.go:141] libmachine: (multinode-346389-m02) Calling .GetIP
	I1205 19:56:56.207858  566878 main.go:141] libmachine: (multinode-346389-m02) DBG | domain multinode-346389-m02 has defined MAC address 52:54:00:c0:56:98 in network mk-multinode-346389
	I1205 19:56:56.208343  566878 main.go:141] libmachine: (multinode-346389-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:98", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:55:10 +0000 UTC Type:0 Mac:52:54:00:c0:56:98 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-346389-m02 Clientid:01:52:54:00:c0:56:98}
	I1205 19:56:56.208372  566878 main.go:141] libmachine: (multinode-346389-m02) DBG | domain multinode-346389-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:c0:56:98 in network mk-multinode-346389
	I1205 19:56:56.208531  566878 host.go:66] Checking if "multinode-346389-m02" exists ...
	I1205 19:56:56.208993  566878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:56:56.209045  566878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:56:56.224971  566878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I1205 19:56:56.225494  566878 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:56:56.226047  566878 main.go:141] libmachine: Using API Version  1
	I1205 19:56:56.226079  566878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:56:56.226438  566878 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:56:56.226616  566878 main.go:141] libmachine: (multinode-346389-m02) Calling .DriverName
	I1205 19:56:56.226775  566878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 19:56:56.226793  566878 main.go:141] libmachine: (multinode-346389-m02) Calling .GetSSHHostname
	I1205 19:56:56.229731  566878 main.go:141] libmachine: (multinode-346389-m02) DBG | domain multinode-346389-m02 has defined MAC address 52:54:00:c0:56:98 in network mk-multinode-346389
	I1205 19:56:56.230188  566878 main.go:141] libmachine: (multinode-346389-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:56:98", ip: ""} in network mk-multinode-346389: {Iface:virbr1 ExpiryTime:2024-12-05 20:55:10 +0000 UTC Type:0 Mac:52:54:00:c0:56:98 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-346389-m02 Clientid:01:52:54:00:c0:56:98}
	I1205 19:56:56.230208  566878 main.go:141] libmachine: (multinode-346389-m02) DBG | domain multinode-346389-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:c0:56:98 in network mk-multinode-346389
	I1205 19:56:56.230377  566878 main.go:141] libmachine: (multinode-346389-m02) Calling .GetSSHPort
	I1205 19:56:56.230534  566878 main.go:141] libmachine: (multinode-346389-m02) Calling .GetSSHKeyPath
	I1205 19:56:56.230784  566878 main.go:141] libmachine: (multinode-346389-m02) Calling .GetSSHUsername
	I1205 19:56:56.230908  566878 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20052-530897/.minikube/machines/multinode-346389-m02/id_rsa Username:docker}
	I1205 19:56:56.316062  566878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:56:56.330748  566878 status.go:176] multinode-346389-m02 status: &{Name:multinode-346389-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 19:56:56.330806  566878 status.go:174] checking status of multinode-346389-m03 ...
	I1205 19:56:56.331180  566878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:56:56.331229  566878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:56:56.347693  566878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45259
	I1205 19:56:56.348231  566878 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:56:56.348799  566878 main.go:141] libmachine: Using API Version  1
	I1205 19:56:56.348827  566878 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:56:56.349213  566878 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:56:56.349425  566878 main.go:141] libmachine: (multinode-346389-m03) Calling .GetState
	I1205 19:56:56.350907  566878 status.go:371] multinode-346389-m03 host status = "Stopped" (err=<nil>)
	I1205 19:56:56.350921  566878 status.go:384] host is not running, skipping remaining checks
	I1205 19:56:56.350926  566878 status.go:176] multinode-346389-m03 status: &{Name:multinode-346389-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-346389 node start m03 -v=7 --alsologtostderr: (40.304478332s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.95s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-346389 node delete m03: (1.762140179s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.31s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (178.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-346389 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 20:05:34.452560  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:05:51.383925  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:08:15.012592  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-346389 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m58.138857412s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-346389 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.69s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-346389
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-346389-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-346389-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (70.619295ms)

                                                
                                                
-- stdout --
	* [multinode-346389-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-346389-m02' is duplicated with machine name 'multinode-346389-m02' in profile 'multinode-346389'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-346389-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-346389-m03 --driver=kvm2  --container-runtime=crio: (42.903198938s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-346389
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-346389: exit status 80 (226.360945ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-346389 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-346389-m03 already exists in multinode-346389-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-346389-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.05s)

                                                
                                    
TestScheduledStopUnix (118.73s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-898791 --memory=2048 --driver=kvm2  --container-runtime=crio
E1205 20:12:58.079932  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-898791 --memory=2048 --driver=kvm2  --container-runtime=crio: (47.011908802s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-898791 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-898791 -n scheduled-stop-898791
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-898791 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1205 20:13:03.194812  538186 retry.go:31] will retry after 137.996µs: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.196004  538186 retry.go:31] will retry after 112.885µs: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.197160  538186 retry.go:31] will retry after 317.007µs: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.198325  538186 retry.go:31] will retry after 462.265µs: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.199473  538186 retry.go:31] will retry after 512.766µs: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.200622  538186 retry.go:31] will retry after 1.018383ms: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.201774  538186 retry.go:31] will retry after 773.564µs: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.202899  538186 retry.go:31] will retry after 1.188269ms: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.205107  538186 retry.go:31] will retry after 1.358484ms: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.207297  538186 retry.go:31] will retry after 4.202582ms: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.212514  538186 retry.go:31] will retry after 5.547342ms: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.218763  538186 retry.go:31] will retry after 8.778274ms: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.228005  538186 retry.go:31] will retry after 8.280693ms: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.237256  538186 retry.go:31] will retry after 21.009445ms: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
I1205 20:13:03.258497  538186 retry.go:31] will retry after 43.329553ms: open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/scheduled-stop-898791/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-898791 --cancel-scheduled
E1205 20:13:15.012170  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-898791 -n scheduled-stop-898791
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-898791
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-898791 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-898791
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-898791: exit status 7 (78.057113ms)

                                                
                                                
-- stdout --
	scheduled-stop-898791
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-898791 -n scheduled-stop-898791
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-898791 -n scheduled-stop-898791: exit status 7 (70.33802ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-898791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-898791
--- PASS: TestScheduledStopUnix (118.73s)
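The retry lines at the top of this test show minikube's retry helper polling for the scheduled-stop pid file with steadily growing, jittered waits. Below is a minimal Go sketch of that poll-with-backoff pattern; it is an illustration only, not minikube's actual retry.go, and the file path and timeout are made up for the example.

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls for path, sleeping a little longer (with jitter) after each
// failed attempt, mirroring the "will retry after ..." lines above.
func waitForFile(path string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 100 * time.Microsecond
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %w", path, err)
		}
		// Grow the delay and add jitter so retries do not line up exactly.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
}

func main() {
	// Illustrative path; the test itself watches a pid file under the profile directory.
	if err := waitForFile("/tmp/scheduled-stop.pid", 3*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}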

                                                
                                    
TestRunningBinaryUpgrade (233.52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4130777992 start -p running-upgrade-617890 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4130777992 start -p running-upgrade-617890 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m4.359027623s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-617890 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-617890 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m45.527794844s)
helpers_test.go:175: Cleaning up "running-upgrade-617890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-617890
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-617890: (1.192638722s)
--- PASS: TestRunningBinaryUpgrade (233.52s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.60s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (148.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2331275975 start -p stopped-upgrade-899594 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1205 20:15:51.381092  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2331275975 start -p stopped-upgrade-899594 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m42.478290863s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2331275975 -p stopped-upgrade-899594 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2331275975 -p stopped-upgrade-899594 stop: (2.146555391s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-899594 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-899594 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.700488086s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (148.33s)

                                                
                                    
TestPause/serial/Start (63.5s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-594992 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-594992 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m3.501303966s)
--- PASS: TestPause/serial/Start (63.50s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-899594
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-739327 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-739327 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (77.517938ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-739327] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
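The usage error above comes from mutually exclusive flags: a Kubernetes version cannot be pinned while also asking for no Kubernetes at all. The Go sketch below shows the shape of that check; it is not minikube's actual validation code, and the exit code 14 is simply what the run above reports for this usage error.

package main

import (
	"errors"
	"flag"
	"fmt"
	"os"
)

// validateFlags rejects the contradictory combination seen in the log:
// --no-kubernetes together with an explicit --kubernetes-version.
func validateFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if err := validateFlags(*noK8s, *version); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // the run above exits with status 14 on this usage error
	}
	fmt.Println("flags OK")
}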

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (49.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-739327 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-739327 --driver=kvm2  --container-runtime=crio: (49.603378571s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-739327 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.88s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-739327 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-739327 --no-kubernetes --driver=kvm2  --container-runtime=crio: (4.416227733s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-739327 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-739327 status -o json: exit status 2 (231.865476ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-739327","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-739327
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-739327: (1.062723516s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.71s)
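The status -o json line above is the interesting artifact here: the host stays Running while Kubelet and APIServer report Stopped once Kubernetes is dropped. A small Go sketch for decoding that JSON follows; the struct only models the fields visible in this one line, so the real command may emit more.

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the fields visible in the status line above; it is not a
// complete schema of minikube's JSON output.
type Status struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	raw := `{"Name":"NoKubernetes-739327","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s Status
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	// With Kubernetes removed, the VM keeps running while the control plane reports Stopped.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
}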

                                                
                                    
TestNoKubernetes/serial/Start (27.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-739327 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-739327 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.811744136s)
--- PASS: TestNoKubernetes/serial/Start (27.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-739327 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-739327 "sudo systemctl is-active --quiet service kubelet": exit status 1 (220.899308ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.34s)

                                                
                                    
TestNetworkPlugins/group/false (3.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-383287 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-383287 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (386.096723ms)

                                                
                                                
-- stdout --
	* [false-383287] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20052
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:18:11.572521  578413 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:18:11.572785  578413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:18:11.572796  578413 out.go:358] Setting ErrFile to fd 2...
	I1205 20:18:11.572801  578413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:18:11.573034  578413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20052-530897/.minikube/bin
	I1205 20:18:11.573647  578413 out.go:352] Setting JSON to false
	I1205 20:18:11.574803  578413 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10838,"bootTime":1733419054,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:18:11.574914  578413 start.go:139] virtualization: kvm guest
	I1205 20:18:11.577086  578413 out.go:177] * [false-383287] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:18:11.578991  578413 notify.go:220] Checking for updates...
	I1205 20:18:11.579008  578413 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 20:18:11.580537  578413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:18:11.582115  578413 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20052-530897/kubeconfig
	I1205 20:18:11.583540  578413 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20052-530897/.minikube
	I1205 20:18:11.584775  578413 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:18:11.586320  578413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:18:11.588495  578413 config.go:182] Loaded profile config "NoKubernetes-739327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1205 20:18:11.588663  578413 config.go:182] Loaded profile config "force-systemd-flag-130544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:18:11.588788  578413 config.go:182] Loaded profile config "kubernetes-upgrade-886958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 20:18:11.588930  578413 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:18:11.887061  578413 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:18:11.888581  578413 start.go:297] selected driver: kvm2
	I1205 20:18:11.888602  578413 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:18:11.888618  578413 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:18:11.890911  578413 out.go:201] 
	W1205 20:18:11.892324  578413 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1205 20:18:11.893738  578413 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-383287 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-383287" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-383287" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-383287

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-383287"

                                                
                                                
----------------------- debugLogs end: false-383287 [took: 3.218361225s] --------------------------------
helpers_test.go:175: Cleaning up "false-383287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-383287
--- PASS: TestNetworkPlugins/group/false (3.75s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-739327
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-739327: (1.451585778s)
--- PASS: TestNoKubernetes/serial/Stop (1.45s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (45.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-739327 --driver=kvm2  --container-runtime=crio
E1205 20:18:15.012206  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-739327 --driver=kvm2  --container-runtime=crio: (45.740119803s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (45.74s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-739327 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-739327 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.295288ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (118.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-816185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-816185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m58.920714114s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (118.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (89.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-789000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1205 20:22:14.455290  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-789000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m29.32522111s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-816185 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [308c549a-f9e6-4b95-8539-595c641e71e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [308c549a-f9e6-4b95-8539-595c641e71e2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004781972s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-816185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.28s)
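The DeployApp step above waits up to 8m0s for pods matching "integration-test=busybox" to become healthy before exec'ing into the pod. A rough Go sketch of that wait loop, shelling out to kubectl the way the harness does, follows; the context name, selector, and time budget are copied from the log, while podsRunning itself is a hypothetical helper rather than part of the test code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podsRunning reports whether every pod matching the selector is in phase Running,
// using kubectl with the same context the test uses. Hypothetical helper.
func podsRunning(kubectlContext, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubectlContext,
		"get", "pods", "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(8 * time.Minute) // same budget as the test's wait
	for time.Now().Before(deadline) {
		if ok, err := podsRunning("no-preload-816185", "integration-test=busybox"); err == nil && ok {
			fmt.Println("busybox is Running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for busybox")
}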

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-789000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [96262247-937e-4795-9107-73f1f2b6eeaa] Pending
helpers_test.go:344: "busybox" [96262247-937e-4795-9107-73f1f2b6eeaa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [96262247-937e-4795-9107-73f1f2b6eeaa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003932626s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-789000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-816185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-816185 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-789000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-789000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.039779584s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-789000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-942599 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-942599 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m27.396910204s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-942599 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e2fbf81a-7842-4591-9538-b64348a8ae02] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e2fbf81a-7842-4591-9538-b64348a8ae02] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004441263s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-942599 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-942599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-942599 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (689.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-816185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-816185 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (11m28.822197159s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-816185 -n no-preload-816185
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (689.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (607.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-789000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1205 20:25:51.381098  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-789000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m7.613459752s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-789000 -n embed-certs-789000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (607.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-386085 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-386085 --alsologtostderr -v=3: (3.431652916s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-386085 -n old-k8s-version-386085: exit status 7 (72.975277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-386085 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
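EnableAddonAfterStop first checks the host state and accepts exit status 7 from the status command as "may be ok", since a stopped profile is exactly what it expects at this point. A Go sketch of that tolerant status check follows; the exit-code handling mirrors what this log shows rather than a documented contract, and hostStatus is a hypothetical helper.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus runs the same status command as the test and, like the test,
// treats exit status 7 as a usable "stopped" result instead of a hard failure.
func hostStatus(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
			return string(out), nil // stopped profile: non-zero exit, but the output is still meaningful
		}
		return "", err
	}
	return string(out), nil
}

func main() {
	state, err := hostStatus("old-k8s-version-386085")
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host state:", state)
}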

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (519.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-942599 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1205 20:28:15.013538  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:29:38.082223  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:30:51.381729  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:33:15.012648  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:35:51.381540  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-942599 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (8m39.103022497s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942599 -n default-k8s-diff-port-942599
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (519.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-024411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-024411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (48.67725009s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-024411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-024411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.14544035s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-024411 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-024411 --alsologtostderr -v=3: (10.595875134s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.60s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-024411 -n newest-cni-024411
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-024411 -n newest-cni-024411: exit status 7 (77.469264ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-024411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
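Note: the harness treats this non-zero exit code from "status" as informational ("may be ok" above) because the profile is intentionally stopped at this point. A minimal shell sketch of the same tolerance, reusing the profile name from the log (an illustration, not the harness's own helper):

    # Tolerate the non-zero exit codes "minikube status" returns for a stopped/paused profile.
    if host=$(out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-024411 -n newest-cni-024411); then
      echo "host state: ${host}"
    else
      rc=$?
      echo "host state: ${host} (status exited ${rc}; expected while the profile is stopped)"
    fi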

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (41.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-024411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1205 20:50:51.381198  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-024411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (41.633992547s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-024411 -n newest-cni-024411
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (41.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (98.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m38.366769884s)
--- PASS: TestNetworkPlugins/group/auto/Start (98.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-024411 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-024411 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-024411 -n newest-cni-024411
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-024411 -n newest-cni-024411: exit status 2 (260.826599ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-024411 -n newest-cni-024411
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-024411 -n newest-cni-024411: exit status 2 (248.710908ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-024411 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-024411 -n newest-cni-024411
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-024411 -n newest-cni-024411
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (57.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (57.7826563s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (98.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m38.525239368s)
--- PASS: TestNetworkPlugins/group/flannel/Start (98.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-383287 "pgrep -a kubelet"
I1205 20:52:33.196497  538186 config.go:182] Loaded profile config "enable-default-cni-383287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-383287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8cvcn" [b13f0234-3b33-422b-b8fd-8d044bf61299] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8cvcn" [b13f0234-3b33-422b-b8fd-8d044bf61299] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.007023498s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.29s)
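The "waiting 15m0s for pods matching app=netcat" step above is the harness's own poll loop (helpers_test.go); an equivalent manual readiness check against the same context would be (a sketch, not what net_test.go actually runs):

    # Manual equivalent of the harness's app=netcat readiness wait
    kubectl --context enable-default-cni-383287 wait --for=condition=ready pod -l app=netcat --timeout=15m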

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-383287 "pgrep -a kubelet"
I1205 20:52:44.918286  538186 config.go:182] Loaded profile config "auto-383287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-383287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pdr4b" [fe757f7f-f0ce-4946-a411-5eed5b5b3a81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pdr4b" [fe757f7f-f0ce-4946-a411-5eed5b5b3a81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006080461s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-383287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
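The DNS, Localhost and HairPin checks above (repeated below for every plugin) reduce to three kubectl exec commands; the exact invocations from the log, re-runnable by hand against this profile, are:

    # DNS resolution from inside the netcat pod
    kubectl --context enable-default-cni-383287 exec deployment/netcat -- nslookup kubernetes.default
    # Pod-local connectivity
    kubectl --context enable-default-cni-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: connect back to the pod's own service name
    kubectl --context enable-default-cni-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"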

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-383287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (62.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1205 20:53:04.842723  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:53:07.404554  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m2.086250093s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (108.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1205 20:53:15.012552  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/functional-583983/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m48.632674278s)
--- PASS: TestNetworkPlugins/group/calico/Start (108.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2d4lm" [4fd5f690-3a58-433b-8dff-3f6b8967d125] Running
E1205 20:53:22.768379  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004282841s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-383287 "pgrep -a kubelet"
I1205 20:53:26.880178  538186 config.go:182] Loaded profile config "flannel-383287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-383287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bs8hf" [b272f8a8-f7c6-4c46-82d1-b2f5e157e055] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bs8hf" [b272f8a8-f7c6-4c46-82d1-b2f5e157e055] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004432781s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-383287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (84.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m24.863267166s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (84.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (106.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-383287 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m46.321443481s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (106.32s)
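For reference, the TestNetworkPlugins start commands in this run share one shape and differ only in the CNI selector; the auto variant passes no CNI flag at all. A synopsis assembled from the invocations above (the profile name is illustrative):

    # Common shape of the network-plugin start commands in this run
    out/minikube-linux-amd64 start -p <profile> --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --driver=kvm2 --container-runtime=crio \
      [--enable-default-cni=true | --cni=flannel | --cni=bridge | --cni=calico | --cni=kindnet | --cni=testdata/kube-flannel.yaml]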

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-383287 "pgrep -a kubelet"
I1205 20:54:06.631686  538186 config.go:182] Loaded profile config "bridge-383287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-383287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dfl8r" [3b8383c9-c0af-4976-ab3d-efb20bdd7d5d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dfl8r" [3b8383c9-c0af-4976-ab3d-efb20bdd7d5d] Running
E1205 20:54:18.292186  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:18.298642  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:18.310211  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:18.331833  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:18.373434  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:18.455078  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:18.616692  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:18.938510  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:19.580080  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.003684618s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (10.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-383287 exec deployment/netcat -- nslookup kubernetes.default
E1205 20:54:20.861940  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:23.423996  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:24.212513  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:28.546156  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/old-k8s-version-386085/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context bridge-383287 exec deployment/netcat -- nslookup kubernetes.default: (10.156643584s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (10.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5pfrk" [9596223e-7379-4c7c-8cfb-c7669fba9038] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004993292s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-383287 "pgrep -a kubelet"
I1205 20:55:08.364582  538186 config.go:182] Loaded profile config "calico-383287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-383287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-383287 replace --force -f testdata/netcat-deployment.yaml: (1.177303759s)
I1205 20:55:09.545567  538186 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1205 20:55:09.576881  538186 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6mrsb" [9460d13b-853d-42ce-ba0c-10d712b46e19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6mrsb" [9460d13b-853d-42ce-ba0c-10d712b46e19] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005549659s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-383287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-383287 "pgrep -a kubelet"
E1205 20:55:20.381120  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.crt: no such file or directory" logger="UnhandledError"
I1205 20:55:20.530577  538186 config.go:182] Loaded profile config "custom-flannel-383287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-383287 replace --force -f testdata/netcat-deployment.yaml
E1205 20:55:20.543093  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6f4hq" [0d976e7c-d7b4-4f6a-a6c6-5dee90056d47] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 20:55:20.865349  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:55:21.507259  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:55:22.788781  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:55:25.350306  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-6f4hq" [0d976e7c-d7b4-4f6a-a6c6-5dee90056d47] Running
E1205 20:55:30.473069  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/default-k8s-diff-port-942599/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004523626s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-383287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xgf8x" [ca797fbd-6ec5-4480-a1ab-135f3ce7acdb] Running
E1205 20:55:46.134646  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/no-preload-816185/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004844314s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-383287 "pgrep -a kubelet"
I1205 20:55:49.181916  538186 config.go:182] Loaded profile config "kindnet-383287": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-383287 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rlkfc" [5ec9f922-6a3a-4d62-96d1-9719cc011305] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 20:55:51.381443  538186 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20052-530897/.minikube/profiles/addons-396564/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-rlkfc" [5ec9f922-6a3a-4d62-96d1-9719cc011305] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003437251s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-383287 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-383287 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    

Test skip (39/311)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
256 TestStartStop/group/disable-driver-mounts 0.15
269 TestNetworkPlugins/group/kubenet 3.45
281 TestNetworkPlugins/group/cilium 4.06
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-396564 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-242147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-242147
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-383287 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-383287" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-383287" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-383287

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-383287"

                                                
                                                
----------------------- debugLogs end: kubenet-383287 [took: 3.294768876s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-383287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-383287
--- SKIP: TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-383287 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-383287" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-383287

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-383287" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-383287"

                                                
                                                
----------------------- debugLogs end: cilium-383287 [took: 3.909015817s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-383287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-383287
--- SKIP: TestNetworkPlugins/group/cilium (4.06s)

                                                
                                    